To configure the Beacon to recognize the Certificate Authority:
Obtain the Certificate of the Web Site's Certificate Authority, as follows:
In Microsoft Internet Explorer, connect to the HTTPS URL of the Web Site you are attempting to monitor.
Double-click the lock icon at the bottom of the browser screen, which indicates that you have connected to a secure Web site.
The browser displays the Certificate dialog box, which describes the Certificate used for this Web site. Other browsers offer a similar mechanism to view the Certificate detail of a Web Site.
Click the Certificate Path tab and select the first entry in the list of certificates as shown in Figure 2-7.
Click View Certificate to display a second Certificate dialog box.
Click the Details tab on the Certificate window.
Click Copy to File to display the Certificate Manager Export wizard.
In the Certificate Manager Export wizard, select Base64 encoded X.509 (.CER) as the format you want to export and save the certificate to a text file with an easily-identifiable name, such as beacon_certificate.cer.
Open the certificate file using your favorite text editor.
The content of the certificate file will look similar to the example below.
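A Base64-encoded certificate file has this general shape (the body below is a truncated placeholder, not a real certificate):
-----BEGIN CERTIFICATE-----
MIIDxTCCAq2gAwIBAgIQ...
(several more lines of Base64 text)
...SnMDLpX8qA==
-----END CERTIFICATE-----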
Update the list of Beacon Certificate Authorities, as follows:
Locate the b64InternetCertificate.txt file in the following directory of Agent Home of the Beacon host:
agent_home/sysman/config/
This file contains a list of Base64 Certificates.
Edit the b64InternetCertificate.txt file and add the contents of the Certificate file you just exported to the top of the file, taking care to include all the Base64 text of the Certificate including the BEGIN and END lines.
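One way to do this from the command line (a sketch, assuming the exported file is named beacon_certificate.cer as above):
cd agent_home/sysman/config
cp b64InternetCertificate.txt b64InternetCertificate.txt.orig
cat beacon_certificate.cer b64InternetCertificate.txt.orig > b64InternetCertificate.txt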
Restart the Management Agent.
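On a typical Agent installation the restart is done with emctl from the Agent Home bin directory, for example:
agent_home/bin/emctl stop agent
agent_home/bin/emctl start agent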
After you restart the Management Agent, the Beacon detects your addition to the list of Certificate Authorities recognized by Beacon and you can successfully monitor the availability and performance of the secure Web site URL.
Wednesday, December 16, 2009
Wednesday, December 9, 2009
shutdown or reboot a Solaris Server
Reboot Commands:
shutdown -y -i6 -g0
or
sync;sync;init 6
or
reboot
Shutdown Commands:
shutdown -y -i5 -g0
or
sync;sync;init 5
or
poweroff
To bring the system to the OK prompt:
init 0
or
shutdown -i0 -g0 -y
Resetting a domain on E25K
setkeyswitch -d domain off
showkeyswitch -d domain
setobpparams -d domain 'auto-boot?=false'
showobpparams -d domain
setkeyswitch -d domain on
console -d domain
ok>
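From the ok prompt the domain can then be booted, and auto-boot restored from the system controller (same commands as above, with the value flipped):
ok boot
setobpparams -d domain 'auto-boot?=true'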
How to login on E25K console
ssh -l user e25kcontroller
$ sudo su -
# su - sms-svc
Password:
E25ksystem-controller:sms-svc:1> showplatform
> showboards
> console -d [ ABC ]
console login: root
password: **********
NOTE: The escape character is "~."
Wednesday, August 19, 2009
Steps to clear a mirror in the Needs Maintenance state
Submirror 0: d31
    State: Needs Maintenance
Submirror 1: d32
metadetach d30 d32
metadetach -f d30 d32
metaclear d32
metastat -p
metainit -f d32 1 1 c0t1d0s5
metattach d30 d32
metastat | grep -i progress
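Once the resync finishes, confirm the submirror state has returned to Okay:
metastat d30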
Friday, August 14, 2009
Memory Errors after Firmware upgrade on T2000
On the T1000/T2000, when POST encounters a single CE (correctable error), the associated DIMM is declared faulty and half of the system's memory is deconfigured and unavailable to Solaris. Since PSH (Predictive Self-Healing) is the primary means of detecting errors and diagnosing faults on the Niagara platforms, this policy is too aggressive.
Steps to follow to resolve this issue (set the diag level to minimum and reboot the server):
sc> setsc diag_level min
sc> setsc diag_mode normal
sc> console -f
# shutdown -y -i6 -g0
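To double-check the settings at the sc prompt before rebooting, showsc can be used (if the parameter form is not accepted, plain showsc lists every setting):
sc> showsc diag_level
sc> showsc diag_mode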
Thursday, August 13, 2009
/opt/SUNWexplo/bin/curl.sparc -T vmcore.0 https://supportfiles.sun.com/curl?file=CASE#-vmcore.0\&root=cores
If you want to upload core files larger than 2 GB, use the command above; a sketch of the same pattern for an explorer archive follows the directory list below.
vmcore.0 - Local file name
CASE#-vmcore.0 - Destination file name
cores - Destination directory name
Choice of directories available on destination side.
* cores
* europe-cores/asouth/incoming
* europe-cores/ch/incoming
* europe-cores/de/incoming
* europe-cores/fr/incoming
* europe-cores/se/incoming
* europe-cores/uk/incoming
* iplanetcores
* explorer
* explorer-amer
* explorer-apac
* explorer-emea
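A sketch of the same pattern for an explorer archive going to the EMEA directory (file name and case number are placeholders):
/opt/SUNWexplo/bin/curl.sparc -T explorer.hostname.tar.gz https://supportfiles.sun.com/curl?file=CASE#-explorer.hostname.tar.gz\&root=explorer-emea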
Thursday, July 30, 2009
Clear Faults on Sun Fire E2900
lom> showchs -b
lom> showchs -c /N0/SB0 -v
lom> service
lom[service]> setchs -s ok -r "Case#" -c /N0/RP2
lom[service]> setchs -s ok -r "Case#" -c /N0/SB0
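After setting the status, re-check the board state with the same command used at the start to confirm the components now report ok:
lom> showchs -b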
Tuesday, July 21, 2009
Monday, July 13, 2009
Commands to enable disabled components (Solaris)
-> showcomponent
-> enablecomponent
-> clearasrdb
clearasrdb clears all disabled components from the ASR database.
-> showcomponent
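For reference, enablecomponent takes the component name reported by showcomponent; a hypothetical example for a blacklisted DIMM:
-> enablecomponent MB/CMP0/CH0/R0/D0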
Friday, June 19, 2009
Below is the Sun man page output, for better understanding:
# fmdump -v -u d05a9f16-e969-4988-d340-dea1b54bd307
TIME                 UUID                                 SUNW-MSG-ID
Aug 17 07:18:43.9562 d05a9f16-e969-4988-d340-dea1b54bd307 SUN4U-8000-2S
  100%  fault.memory.dimm
        FRU: mem:///component=J0202
        rsrc: mem:///component=J0202
Once the DIMM(s) have been replaced, the command fmadm faulty should be run to see the status of the memory involved.
Example:
# fmadm faulty
   STATE RESOURCE / UUID
-------- ----------------------------------------------------------------------
degraded mem:///component=J0202
         9f32d247-869f-c9f2-949b-d9ec88b09640
-------- ----------------------------------------------------------------------
Then fmadm repair should be run to remove the memory from the faulty list.
# fmadm repair mem:///component=J0202
fmadm: recorded repair to mem:///component=J0202
Verifying the results: after issuing the above commands, you can check the status of the memory modules by running fmadm faulty again; this time it will not list any results.
Tuesday, June 9, 2009
Friday, May 8, 2009
Steps for Installing Solaris patch cluster
1. Check all mirrors for "needs maintenance", and if there are any, "metasync" those.
2. "metadetach" the mirror
3. "init 0"
4. "OK> boot -s"
5. If necessary, "zfs mount -a" to get everything mounted in single-user
6. "install_cluster"
7. "init 0"
8. "boot -r"
9. "metattach" the mirror that was broken
2. "metadetach" the mirror
3. "init 0"
4. "OK> boot -s"
5. If necessary, "zfs mount -a" to get everything mounted in single-user
6. "install_cluster"
7. "init 0"
8. "boot -r"
9. "metattach" the mirror that was broken
Friday, May 1, 2009
Some ZPOOL commands
bash-3.00# /usr/sbin/zpool history
History for 'pool_name':
2009-05-01.15:14:12 zpool create pool_name c2t50060482D52D2E56d10 c2t50060482D52D2E56d11
2009-05-01.15:16:29 zfs create pool_name/u001
2009-05-01.15:16:33 zfs create pool_name/u003
2009-05-01.15:20:33 zfs set mountpoint=/u001 pool_name/u001
2009-05-01.15:24:21 zfs set compression=on pool_name/u001
2009-05-01.15:24:24 zfs set compression=on pool_name/u003
2009-05-01.15:25:11 zfs set mountpoint=/u003 pool_name/u003
2009-05-01.15:31:43 zfs set mountpoint=/u003 pool_name/u003
2009-05-01.15:32:06 zfs set mountpoint=/u003 pool_name/u003
2009-05-01.15:33:36 zfs set mountpoint=/mnt pool_name/u001
2009-05-01.15:33:42 zfs set mountpoint=/u001 pool_name/u001
2009-05-01.15:37:56 zfs set compression=off pool_name/u001
2009-05-01.15:37:57 zfs set compression=off pool_name/u003
No quotas, no compression
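The current settings can be confirmed per dataset with zfs get, for example:
zfs get quota,compression pool_name/u001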
Sunday, April 19, 2009
Saturday, April 18, 2009
Core-Dump
"savecore -L" which produces a live core dump on the running system
reboot -d to reboot the box and get a core.
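The dump device and savecore directory these dumps land in can be reviewed with:
dumpadm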
some zoneadm commands
zoneadm list -cv
zoneadm list -iv
zoneadm list
zoneadm list -p
zoneadm -z myzone boot -- -m verbose
zoneadm -z myzone reboot -- -m verbose
Thursday, April 16, 2009
Exporting Solaris zone configurations
(Got it from another site for my information)
I have been using Solaris 10 zone technology for the past 4-5 months, and just recently came across the zonecfg(1M) “export” option. This option allows you to export the configuration from a specific zone, which can be used to recreate zones, or as a template when adding additional zones (with some adjustments, of course). The following example prints the zone configuration for a zone called “irc”:
$ zonecfg -z irc export
create -b
set zonepath=/export/home/zone_irc
set autoboot=false
add net
set address=192.168.1.4
set physical=hme0
end
This is super useful, and can make creating 1000s of zones a snap!
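A sketch of using the export as a template for a new zone (file name and zone name below are placeholders):
$ zonecfg -z irc export > /tmp/newzone.cfg
(edit zonepath, address, etc. in /tmp/newzone.cfg)
$ zonecfg -z newzone -f /tmp/newzone.cfg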
Patching zones when they are attached to hosts
(Got this information from a different post for my reference)
I recently patched one of my Solaris 10 hosts, and decided to test out the zone update on attach functionality that is now part of Solaris 10 update 6. The update on attach feature allows detached zones to get patched when they are attached to a host, which can be rather handy if you are moving zones around your infrastructure. To test this functionality, I first detached a zone from the host I was going to patch:
$ zoneadm -z zone1 detach
$ zoneadm list -vc
ID NAME STATUS PATH BRAND IP
0 global running / native shared
- zone1 configured /zones/zone1 native shared
Once the zone was detached, I applied the latest Solaris patch bundle and rebooted the server. When the system came back up, I tried to attach the zone:
$ zoneadm -z zone1 attach
These patches installed on the source system are inconsistent with this system:
118668: version mismatch
(17) (19)
118669: version mismatch
(17) (19)
119060: version mismatch
(44) (45)
119091: version mismatch
(31) (32)
119214: version mismatch
(17) (18)
119247: version mismatch
(34) (35)
119253: version mismatch
(29) (31)
119255: version mismatch
(59) (65)
119314: version mismatch
(24) (26)
119758: version mismatch
(12) (14)
119784: version mismatch
(07) (10)
120095: version mismatch
(21) (22)
120200: version mismatch
(14) (15)
120223: version mismatch
(29) (31)
120273: version mismatch
(23) (25)
120411: version mismatch
(29) (30)
120544: version mismatch
(11) (14)
120740: version mismatch
(04) (05)
121119: version mismatch
(13) (15)
121309: version mismatch
(14) (16)
121395: version mismatch
(01) (03)
122213: version mismatch
(28) (32)
122912: version mismatch
(13) (15)
123896: version mismatch
(05) (10)
124394: version mismatch
(08) (09)
124629: version mismatch
(09) (10)
124631: version mismatch
(19) (24)
125165: version mismatch
(12) (13)
125185: version mismatch
(08) (11)
125333: version mismatch
(03) (05)
125540: version mismatch
(04) (06)
125720: version mismatch
(24) (28)
125732: version mismatch
(02) (04)
125953: version mismatch
(17) (18)
126364: version mismatch
(06) (07)
126366: version mismatch
(12) (14)
126420: version mismatch
(01) (02)
126539: version mismatch
(01) (02)
126869: version mismatch
(02) (03)
136883: version mismatch
(01) (02)
137122: version mismatch
(03) (06)
137128: version mismatch
(02) (05)
138224: version mismatch
(02) (03)
138242: version mismatch
(01) (05)
138254: version mismatch
(01) (02)
138264: version mismatch
(02) (03)
138286: version mismatch
(01) (02)
138372: version mismatch
(02) (06)
138628: version mismatch
(02) (07)
138857: version mismatch
(01) (02)
138867: version mismatch
(01) (02)
138882: version mismatch
(01) (02)
These patches installed on this system were not installed on the source system:
125556-02
138889-08
139100-01
139463-02
139482-01
139484-05
139499-04
139501-02
139561-02
139580-02
140145-01
140384-01
140456-01
140775-03
141009-01
141015-01
As you can see in the above output, the zone refused to attach because the zone patch database differed from the global zone patch database. To synchronize the two, I added the “-u” option (update the zone when it is attached to a host) to the zoneadm command line:
$ zoneadm -z zone1 attach -u
Getting the list of files to remove
Removing 1209 files
Remove 197 of 197 packages
Installing 1315 files
Add 197 of 197 packages
Updating editable files
The file within the zone contains a log of the zone update.
Once the zone was updated, I was able to boot the zone without issue:
$ zoneadm -z zone1 boot
$ zoneadm list -vc
ID NAME STATUS PATH BRAND IP
0 global running / native shared
4 zone1 running /zones/zone1 native shared
Wednesday, April 15, 2009
Wednesday, April 8, 2009
Attaching a 3rd disk to a mirror
--> We can add a 3rd disk, SAN or local, and wait for resilvering to complete:
zpool attach zfspool c1t0d0s4 c5t60060480000190100665533030304245d0
zpool status
mirror ONLINE 0 0 0
c1t0d0s4 ONLINE 0 0 0
c1t1d0s4 ONLINE 0 0 0
c5t60060480000190100665533030304245d0 ONLINE 0 0 0 71.5K resilvered
--> Detach the two local disks:
zpool detach zfspool c1t0d0s4
zpool detach zfspool c1t1d0s4
Add Additional storage in zpool
# zpool detach zpool-name c1t2d0
# cfgadm -c unconfigure c1::dsk/c1t2d0
Physically replace the disk with that of a higher capacity
# cfgadm -c configure c1::dsk/c1t2d0
# zpool attach zpool-name c1t3d0 c1t2d0
Wait for resilvering to complete…
# zpool detach zpool-name c1t3d0
# cfgadm -c unconfigure c1::dsk/c1t3d0
Physically replace the disk with that of a higher capacity
# cfgadm -c configure c1::dsk/c1t3d0
# zpool attach zpool-name c1t2d0 c1t3d0
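After the final resilver completes, the pool's capacity can be checked with:
# zpool list zpool-name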
Tuesday, April 7, 2009
Wednesday, March 25, 2009
Command to check open ports
/usr/local/bin/lsof -P -z -i 2>/dev/null | grep global | grep LISTEN | grep -v localhost
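If lsof is not installed, a similar list of listening ports can be obtained with the stock netstat:
netstat -an | grep LISTEN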
Adding Disks To Zpool
Re-scan the HBA, then create new disk device nodes:
cfgadm -c configure c5 ; devfsadm -C -v
Have a look to see if the disks actually showed up:
format
Look at the new disk device nodes:
ls -latr /dev/rdsk/*s2 | grep "Oct 24"
...and let's just print the Solaris disk devices for use in the next step:
ls -latr /dev/rdsk/*s2 | grep "Oct 24" | awk '{print $9}'
Finally, add the disks to the pool:
zpool add ZPool-Name c5t60060480000190100665533032323438d0 \
  c5t60060480000190100665533032323338d0 \
  c5t60060480000190100665533032323238d0 \
  c5t60060480000190100665533032323138d0 \
  c5t60060480000190100665533032323038d0 \
  c5t60060480000190100665533032314638d0 \
  c5t60060480000190100665533032314538d0 \
  c5t60060480000190100665533032314438d0 \
  c5t60060480000190100665533032314338d0 \
  c5t60060480000190100665533032314238d0 \
  c5t60060480000190100665533032314138d0 \
  c5t60060480000190100665533032313938d0 \
  c5t60060480000190100665533032313838d0 \
  c5t60060480000190100665533032313738d0 \
  c5t60060480000190100665533032313638d0 \
  c5t60060480000190100665533032313538d0
And admire our handiwork:
zpool status Zpool-name
zfs list | grep Zpool-name
How To Log Into E-25K
ssh -l (user-id) (system-control)
sudo su - (Sudo to root)
su - sms-svc (login to user who have access to system control)
console -d [ ABC ] (login to domains)
NOTE: The escape character is "~."
Controlling Weblogic Admin Server with Solaris SMF
--> Create Admin.xml under /var/svc/manifest/application/management (a minimal sketch follows):
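A minimal sketch of what Admin.xml might contain (the start script path, user, and timeouts below are assumptions to adapt, not the original post's manifest):
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type='manifest' name='Admin'>
  <service name='Admin/weblogic' type='service' version='1'>
    <create_default_instance enabled='false'/>
    <single_instance/>
    <method_context>
      <method_credential user='weblogic' group='other'/>
    </method_context>
    <exec_method type='method' name='start' exec='/opt/weblogic/startWebLogic.sh' timeout_seconds='600'/>
    <exec_method type='method' name='stop' exec=':kill' timeout_seconds='60'/>
  </service>
</service_bundle>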
--> in /etc/security/auth_attr, add (as root):
solaris.smf.manage.Admin/weblogic:::Admin Management::
--> and then run the usermod command (as root):
usermod -A solaris.smf.manage.Admin/weblogic weblogic
--> Run following to validate :
svccfg validate Admin.xml
--> Run following to import :
svccfg import Admin.xml
--> Checking status :
svcs -a | grep weblogic
--> Enable it :
svcadm enable Admin/weblogic:default
--> Disable it
svcadm disable Admin/weblogic:default
Monday, March 23, 2009
format c20t60060480000190100665533032354438d0 (To verify disk layout)
metainit d151 1 1 /dev/rdsk/c20t60060480000190100665533032354438d0s0 (To create Meta Device)
newfs -i 8192 /dev/md/rdsk/d151 (To create a new filesystem)
mkdir /u039 (create a directory)
vi /etc/vfstab (Enter information in vfstab)
mount /u039 (Mount the filesystem)
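The vfstab entry referenced above might look like this (a sketch using the metadevice and mount point from this post; adjust options as needed):
/dev/md/dsk/d151   /dev/md/rdsk/d151   /u039   ufs   2   yes   logging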
zpool create c4t60060480000190100665533030394138d0
zfs set mountpoint=none
zfs create /apps
zfs create /homer
zfs destroy /homer
zfs create /home
zfs set zoned=on /apps
zfs set zoned=on /home
zfs create /dev
zfs set zoned=off /apps
zfs set zoned=off /home
zfs rename /apps /dev/apps
zfs rename /home /dev/home
zfs set quota=2G /dev/apps
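The commands above have lost the pool and dataset names in places; a rough reconstruction using a hypothetical pool name 'zfspool' (and omitting the /homer create/destroy detour) might look like:
zpool create zfspool c4t60060480000190100665533030394138d0
zfs set mountpoint=none zfspool
zfs create zfspool/apps
zfs create zfspool/home
zfs set zoned=on zfspool/apps
zfs set zoned=on zfspool/home
zfs create zfspool/dev
zfs set zoned=off zfspool/apps
zfs set zoned=off zfspool/home
zfs rename zfspool/apps zfspool/dev/apps
zfs rename zfspool/home zfspool/dev/home
zfs set quota=2G zfspool/dev/apps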