DevOps / Agile
"DevOps-tear-down-that-wall"
If, like me, you don't want to be sitting watching videos when you would rather be working -
the podcasts from Command Line Heroes could be for you.
Just put your headphones on and learn loads of things you probably did not know.
I especially liked the podcast on Agile.
"Agile-revolution"
Put some magic into your database.
Click below for some magic - not really, but with a title like that how could I resist.
Going back a long time but still does a job.
Oracle Tip: Put some magic into your database Discover how to check the magic value of a BLOB and return a MIME type and encoding information by directly examining the data.
ZFS Administration
A really handy facility on Solaris is the ability to use ZFS and take a snapshot prior to doing any work (the state is persistent across reboots).
Brilliant if you have a training box and want to roll the course back to the start of the week once the students have wrecked the data - see the sketch below.
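A minimal sketch of that workflow - the dataset and snapshot names here are just examples:
zfs snapshot rootpool/export/home@week_start (take the snapshot before the course starts)
zfs rollback -r rootpool/export/home@week_start (roll the dataset back once the week is over; rollback works per dataset, and -r also removes any snapshots taken after week_start)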
+++++++++++++++++++++++++++++++++
The home filesystem is full (df output):
rootpool/export/home 20G 18G 156M 100% /export/home
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rootpool 58.1G 216G 98K /rootpool
rootpool@20130611 18K - 95K -
rootpool/ROOT 17.9G 216G 18K legacy
rootpool/ROOT@20130611 0 - 18K -
rootpool/ROOT/s10s_u6wos_07b 87.4M 216G 13.3G /
rootpool/ROOT/s10s_u6wos_07b@20130611 0 - 13.3G -
rootpool/ROOT/sol10u10 17.8G 216G 12.0G /
rootpool/ROOT/sol10u10@sol10u10 5.20G - 13.3G -
rootpool/ROOT/sol10u10@20130611 234M - 11.6G -
rootpool/export 20.0G 216G 22K /export
rootpool/export@20130611 1K - 20K -
rootpool/export@20130621 1K - 20K -
rootpool/export@20140223 18K - 22K -
rootpool/export@20140430 18K - 22K -
rootpool/export/home 20.0G 0 18.2G /export/home
rootpool/export/home@20130611 5.35M - 6.10G -
rootpool/export/home@20130621 5.33M - 6.10G -
rootpool/export/home@20140223 102M - 17.2G -
rootpool/export/home@20140430 8.27M - 18.7G -
rootpool/test 100M 216G 100M /rootpool/test
rootpool/test@20130611 16K - 100M -
rootpool/zonenfs 20.0G 216G 20.0G /rootpool/zonenfs
safefmepool 4.06G 15.5G 31K /safefmepool
safefmepool/safefme 4.06G 15.5G 4.06G /SAFEFME
# zfs destroy -r rootpool/export/home@20130611
# zfs destroy -r rootpool/export/home@20130621
Etc…
Now, after removing the old snapshots:
rootpool/export/home 20G 18G 1.8G 91% /export/home
DO NOT RUN zfs destroy WITHOUT THE SNAPSHOT NAME, I.E. THE @ BIT – WITHOUT IT YOU DESTROY THE WHOLE DATASET. THIS IS VERY BAD!!!
If ZFS snapshots are kept for very long periods they hold on to every block that has changed since they were created, so they keep consuming space. That is why removing files made so little difference here: the snapshots still referenced the old data, so the space could not be freed until the snapshots were destroyed. You can check how much space the snapshots themselves are holding:
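A quick way to see this on a reasonably recent ZFS (a sketch - rootpool/export/home is just the dataset from the listing above):
zfs list -o space rootpool/export/home (the USEDSNAP column is the space held only by snapshots)
zfs get usedbysnapshots rootpool/export/home
zfs list -t snapshot -r rootpool/export/home (lists the individual snapshots and their sizes)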
+++++++++++++++++++++++++++++++++
ZFS swap volume
# swap -l
swapfile dev swaplo blocks free
/dev/zvol/dsk/rpool/swap 181,1 8 2097144 2054536
zfs create -V 3G rpool/swap3
swap -a /dev/zvol/dsk/rpool/swap3
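To confirm the new device is actually in use (swap3 being the volume created above):
swap -l (the new /dev/zvol/dsk/rpool/swap3 entry should now be listed)
zfs list rpool/swap3 (confirms the 3G volume exists)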
+++++++++++++++++++++++++++++++++
Taking a recursive snapshot:
root@whisky > zfs snapshot -r STORAGE/whisky01@jan19th2011
Removing a snapshot:
zfs destroy -r diskpool/niagra03@nov8th
Make SURE the @snapshot_name is specified or this will be BAD.
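A safe habit before destroying anything is to list exactly what is there first, so the full @snapshot_name is confirmed (using the dataset above as the example):
zfs list -t snapshot -r diskpool/niagra03 (check the exact snapshot name before running zfs destroy)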
Adding a swap volume to a ZFS pool
zfs create -V 1gb rpool/extraswap
This creates a ZFS volume (zvol) called extraswap in the rpool pool. The device node appears under /dev/zvol/dsk/rpool/ (it is a link to the actual device). Once you have confirmed it is there you can add it with the following command:
swap -a /dev/zvol/dsk/rpool/extraswap
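To make the extra swap survive a reboot, the usual Solaris approach is an /etc/vfstab entry along these lines (a sketch - adjust the volume name to match yours):
/dev/zvol/dsk/rpool/extraswap - - swap - no -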
+++++++++++++++++++++++++++++++++
ZPOOL CREATION AND MISCELLANEOUS
--------------------------------
zpool create tank c1t0d0 c1t1d0
zpool create tank mirror c1t0d0 c1t1d0
zpool create tank raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0
(can be done with slices but they have to be created first).
zpool create -n tank mirror c1t0d0 c2t0d0 (practice create)
zpool create -m /export/zfs home c1t0d0 (specify the mount point)
zpool add [-n] tank mirror c4t0d0 c5t0d0 (add extra space)
zpool attach tank c1t1d0 c2t0d0 (add extra space to existing stuff)
zpool detach tank c1t1d0
zpool offline [-t] tank c1t0d0 (to replace a disk or make it unavailable)
zpool online tank c1t0d0
zpool clear tank (clear error messages in pool)
zpool clear tank c1t0d0 (clear error messages in disk)
zpool replace tank c1t0d0 c2d0s0 (replace 1st with 2nd HS)
zfs create tank/home
zfs set mountpoint=/export/zfs tank/home
zfs set sharenfs=on tank/home
zfs set compression=on tank/home
zfs get all tank/home
zfs create tank/home/psco0508
zfs set quota=5G tank/home/psco0508
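Pulling a few of the commands above together into one sequence (a minimal sketch - pool, disk and dataset names are only examples):
zpool create -m /export/zfs tank mirror c1t0d0 c1t1d0 (mirrored pool mounted at /export/zfs)
zfs create tank/home
zfs set compression=on tank/home
zfs set sharenfs=on tank/home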
root@irene > zfs create diskpool/december01
root@irene > zfs list
NAME USED AVAIL REFER MOUNTPOINT
diskpool 372G 722G 4.88G /diskpool
diskpool/december01 31K 722G 31K /diskpool/december01
root@irene > zfs set mountpoint=/december01 diskpool/december01
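To confirm the new mountpoint has taken effect (using the dataset just created):
zfs get mountpoint diskpool/december01
df -h /december01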
Basic ZPOOL
zpool list
zpool list tank
zpool list -o name,size
zpool list -Ho name,size (no headings and tabbed fields - good for scripting)
I/O ZPOOL
zpool iostat (not very accurate)
zpool iostat tank 2 (every 2 seconds indefinitely)
zpool iostat tank 2 3 (every 2 secs for 3 iterations)
zpool iostat -v (virtual devices as well as I/O)
Health ZPOOL
zpool status -x (short output)
zpool status -v tank
zpool destroy [-f] tank (BE CAREFUL)
zfs list
ZILs Adding and Removal
-----------------------
zpool status (to check what's there)
zpool add diskpool log|cache c1t2d0 c1t3d0 (add a ZIL with 'log', or an L2ARC with 'cache')
zpool remove diskpool c1t2d0 c1t3d0 (remove the log/cache disks again)
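If the log matters it is common to mirror it (a sketch - the disk names are examples):
zpool add diskpool log mirror c1t2d0 c1t3d0 (adds a mirrored log device)
zpool status diskpool (the log appears in its own 'logs' section)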
Migration
---------
zpool export [-f] tank (export the pool - it is no longer visible and the filesystems in it are unmounted)
zpool import (get a list of importable pools, by default from /dev/dsk)
zpool import -d /file (look for exported pools in other directories)
zpool import tank (if several have the same name, use the numeric id to specify which one)
zpool import 6223921996255991199
zpool import dozer dozernew (import and rename the pool)
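Moving a pool between machines is just those two steps (a sketch - the pool name is an example):
zpool export tank (on the old host, before moving the disks)
zpool import tank (on the new host once the disks are visible; add -d <dir> if the devices are not under /dev/dsk)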
DESTROYING and RECREATING
-------------------------
zpool destroy tank
zpool import -Df tank (-D finds destroyed pools; without -f it doesn't actually come back)
zpool status tank
It is possible to bring back a pool even if a device is missing, so long as it's not critical to the operation.
ZPOOL UPGRADE
-------------
zpool upgrade or zpool upgrade -v (zfs upgrade [-v] does the same for the filesystem version)
If using RAID-Z then generally keep the number of disks in each group in single figures, as this works better - for example:
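A sketch with twelve example disks - two raidz groups in one pool rather than one wide group:
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0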
HOT SPARE SECTION
-----------------
zpool add disk_pool spare c1t14d0 (add a disk to a pool as a hot spare)
zpool remove disk_pool c1t14d0 (remove a hot spare disk from a pool)
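Once added, zpool status lists the spare in its own 'spares' section, showing AVAIL or INUSE:
zpool status disk_pool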
+++++++++++++++++++++++++++++++++
Recreating the error when a pool fills up:
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 21.6G 9.65G 42.5K /rpool
rpool/ROOT 7.21G 9.65G 31K legacy
rpool/ROOT/s10_1-13 7.21G 9.65G 7.21G /
rpool/dump 1.00G 9.65G 1.00G -
rpool/export 844M 9.65G 32K /export
rpool/export/home 844M 9.65G 844M /export/home
rpool/oracle 8.41G 9.65G 8.41G /oracle
rpool/swap 1.06G 9.71G 1.00G -
rpool/swap3 3.09G 9.74G 3.00G -
i.e. 9.65G available = 9881.6 Meg, so a 9600m file will just about fit.
From the /export/home area
# mkfile 9600m bigfile
# zfs snapshot -r rpool/export/home@may23rd
Then fill up what is left
# mkfile 100m smallfile
# mkfile 100m smallfile2
# mkfile 77m smallfile3
# df -h
Filesystem size used avail capacity Mounted on
rpool/export/home 31G 10G 0K 100% /export/home
Nothing left
Remove all the files. ZFS does not care about the three small files created after the snapshot, but it still has to keep the 9600m created before it, because the snapshot references that data.
So even though we removed roughly 9.65 Gig of files, only about 276M comes back:
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 31.0G 276M 42.5K /rpool
rpool/ROOT 7.21G 276M 31K legacy
rpool/ROOT/s10_1-13 7.21G 276M 7.21G /
rpool/dump 1.00G 278M 1.00G -
rpool/export 10.2G 276M 32K /export
rpool/export/home 10.2G 276M 844M /export/home
rpool/export/home@may23rd 9.38G - 10.2G -
rpool/oracle 8.41G 276M 8.41G /oracle
rpool/swap 1.06G 340M 1.00G -
rpool/swap3 3.09G 373M 3.00G -
Get rid of the snapshot
# zfs destroy -r rpool/export/home@may23rd
et voilà...
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 21.6G 9.65G 42.5K /rpool
rpool/ROOT 7.21G 9.65G 31K legacy
rpool/ROOT/s10_1-13 7.21G 9.65G 7.21G /
rpool/dump 1.00G 9.65G 1.00G -
rpool/export 844M 9.65G 32K /export
rpool/export/home 844M 9.65G 844M /export/home
rpool/oracle 8.41G 9.65G 8.41G /oracle
rpool/swap 1.06G 9.71G 1.00G -
rpool/swap3 3.09G 9.74G 3.00G -