Sunday, October 27, 2013

Removing Exadata Cell Security

Removing cell security is fairly simple. To remove cell security from the grid disks, we need to remove database-scoped security first, and then ASM-scoped security.

To implement cell security, please refer to my previous post, Securing Exadata Machine.

Steps to remove Database-Scoped Security:

•  Stop CRS & DB services

•  Remove the database client's name from the availableTo attribute of the relevant grid disks
dcli -l root -g ~/cell_group "cellcli -e alter griddisk <griddisk_names> availableTo='+asm'"

•  If no other grid disk is assigned to this database client, remove the key assigned to the database
dcli -l root -g ~/cell_group "cellcli -e assign key for <db_name>=''"

•  Remove the cellkey.ora file from the $ORACLE_HOME/admin/db_unique_name/pfile/ directory

•  Remove the database from the availableTo attribute of all grid disks (see the verification sketch after these steps)
dcli -l root -g ~/cell_group "cellcli -e alter griddisk all availableTo='+asm'"

•  Restart CRS & DB services
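Once the steps are done, it is easy to confirm that the database key is gone and that the grid disks list only +asm in availableTo. A minimal check, assuming the same cell_group file used above:
dcli -l root -g ~/cell_group "cellcli -e list key"
dcli -l root -g ~/cell_group "cellcli -e list griddisk attributes name,availableTo"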

Removing ASM-Scoped Security

ASM-scoped security can be removed only after database-scoped security has been removed from the cell storage. Once ASM-scoped security is removed, the grid disks on the cells are back in open-security mode.

Steps to remove ASM-Scoped Security:

•  Stop CRS & DB services

•  Remove the Oracle ASM client's name from the availableTo attribute of the relevant grid disks
dcli -l root -g ~/cell_group "cellcli -e alter griddisk <griddisk_names> availableTo=''"

•  If no other grid disk is assigned to this ASM cluster client, remove the key assigned to the ASM cluster
dcli -l root -g ~/cell_group "cellcli -e assign key for <asm_cluster>=''"

•  Remove the cellkey.ora file from the /etc/oracle/cell/network-config/ directory on each compute node (a quick check is sketched after these steps)

•  Remove Oracle ASM from the availableTo attribute of all grid disks
dcli -l root -g ~/cell_group "cellcli -e alter griddisk all availableTo=''"

•  Restart CRS & DB services
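To confirm that the cells are back to open security, a quick check (assuming ~/dbs_group lists the compute nodes, as in the configuration post below) is that cellkey.ora is gone from every compute node and that the availableTo attribute is now blank on every grid disk:
dcli -l root -g ~/dbs_group "ls -l /etc/oracle/cell/network-config/cellkey.ora"
dcli -l root -g ~/cell_group "cellcli -e list griddisk attributes name,availableTo"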


Ref: 
Expert Oracle Exadata by Kerry Osborne, Randy Johnson, and Tanel Poder
Oracle Exadata Recipes by John Clarke

Bye,
Saurabh

Friday, October 25, 2013

Securing Exadata Machine

On Exadata, by default, all ASM clusters and databases have access to all grid disks on the cell servers. This is called open-security mode. To restrict ASM clusters and databases to only the grid disks they are allowed to use, Exadata provides two levels of cell security (a quick way to check the current mode is sketched after the list):

1. ASM-scoped Security
2. Database-scoped Security
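To tell which mode the cells are currently in, a simple sketch (cell_group is assumed to list the storage servers): if no keys are listed and the availableTo attribute is blank on the grid disks, the cells are in open-security mode.
dcli -l root -g ~/cell_group "cellcli -e list key"
dcli -l root -g ~/cell_group "cellcli -e list griddisk attributes name,availableTo"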

ASM-Scoped Security

  • Access is restricted at the Exadata grid disk level
  • Allows or restricts ASM clusters' access to specific Exadata grid disks
  • Allows grid disk storage to be isolated between separate clustered environments
  • Useful when you want your Exadata production storage to be completely separate from the non-production environment, or when you need to patch multiple GI environments independently on a single Exadata machine

Steps to create ASM-Scoped Security:
1. sqlplus / as sysasm

2. check ASM DB unique name
        show parameter unique

3. Stop the database and CRS services on each compute node
        srvctl stop database -d <db_name>
        crsctl stop crs

4. Connect to any Exadata cell server and generate a key
        CellCLI> create key

5. a. Copy the key generated by the above command.
b. Create a cellkey.ora file on one of the compute nodes and place it in the GI owner user's home directory.
c. The cellkey.ora file contains:
key=<generated_key>
asm=<ASM db_unique_name>
#realm=my_realm

6. Assign the key to the ASM cluster (the name must match the asm entry in cellkey.ora)
        dcli -l root -g ~/cell_group "cellcli -e assign key for '+ASM'='<key>'"

7. Alter the grid disks and make them available to +ASM
        dcli -l root -g ~/cell_group "cellcli -e alter griddisk all availableTo=\'+ASM\'"
        In the above command, all the grid disks are made available to +ASM. We can instead choose
        only a subset of the grid disks.

8. Copy the cellkey.ora file to the /etc/oracle/cell/network-config folder on each compute node and
        set its permissions to 600.
        dcli -l root -g ~/dbs_group -f /home/oracle/cellkey.ora -d /etc/oracle/cell/network-config/
        dcli -l root -g ~/dbs_group "chmod 600 /etc/oracle/cell/network-config/cellkey.ora"

9. Start CRS on each compute node
        crsctl start crs
        This completes the ASM-scoped security configuration; a quick check is sketched below.
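For the check (a sketch; it assumes the disk groups were mounted before the change), the key should now be listed on every cell and ASM should still mount its disk groups:
        dcli -l root -g ~/cell_group "cellcli -e list key"
        sqlplus / as sysasm
        SQL> select name, state from v$asm_diskgroup;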

Database-Scoped Security

  • Allows or restricts a database's access to specific grid disks
  • Useful when multiple databases access the same ASM cluster
  • ASM-Scoped security must be configured before Database-Scoped security is implemented.

Steps to create Database-Scoped Security:
1. Shut down the database & CRS services on all compute nodes
        crsctl stop crs

2. Connect to any Exadata cell server and generate two keys, one for each database (suppose
        we are configuring this for the PROD & DEV databases)
        CellCLI> create key
        CellCLI> create key

3. Create a cellkey.ora file under the $ORACLE_HOME/admin/<db_name>/pfile directory on each
        compute node for each database you are configuring. Create the directory if it does not
        exist. The cellkey.ora file will contain:
        key=<generated_key>
        asm=<ASM db_unique_name>

4. Change the ownership and permissions of the cellkey.ora file
        chown oracle:oinstall $ORACLE_HOME/admin/<db_name>/pfile/cellkey.ora
        chmod 640 $ORACLE_HOME/admin/<db_name>/pfile/cellkey.ora
        Change the owner and permissions for the other database's cellkey.ora file as well.

5. Assign the keys on the cell servers
        dcli -l root -g ~/cell_group "cellcli -e assign key for <db1_name>='<key1>', <db2_name>='<key2>'"

6. Validate the keys using dcli and cellcli
        dcli -l root -g ~/cell_group "cellcli -e list key"

7. Alter the grid disks and assign them to the databases as required
        CellCLI> alter griddisk <griddisk_name>,<griddisk_name> availableTo='+asm,<db1_name>'
        Note: Keeping '+asm' in the availableTo attribute is mandatory.

8. Validate the grid disks once the above assignment is complete
        dcli -l root -g ~/cell_group "cellcli -e list griddisk attributes name,availableTo"

9. Start CRS & DB services on all compute nodes
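For illustration, step 9 comes down to the usual commands (the database names are placeholders):
        crsctl start crs
        srvctl start database -d <db1_name>
        srvctl start database -d <db2_name>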

That completes database-scoped security on the Exadata grid disks.

Ref: 
Expert Oracle Exadata by Kerry Osborne, Randy Johnson, and Tanel Poder
Oracle Exadata Recipes by John Clarke

Bye,
Saurabh

Sunday, May 26, 2013

Energy Storage Module (ESM) Replacement on Exadata


Energy Storage Module Replacement on Exadata V2 and Exadata Expansion Rack (X2-2) Machine:

Energy Storage Module replacement is part of Exadata preventive maintenance: consumable components should be replaced proactively, based on their expected lifespan, before they fail.

The Energy Storage Modules (ESMs) sit on the PCI flash cards in the storage servers and protect the DRAM cache in the event of a power failure. Failure of an ESM adversely impacts performance, but there is no loss of data and no wrong results.

We replaced 40 ESMs on our V2 and X2 machines last week. A picture of an ESM is below:

[Image: Energy Storage Module (ESM)]


As per Oracle, ESMs need to be replaced every 3 years on V2 machines and every 4 years on X2 machines. The preventive maintenance schedule is as below:

Model                              | Year-end 1 | Year-end 2 | Year-end 3 | Year-end 4 | Year-end 5 | Year-end 6 | Year-end 7
Exadata V2                         | No         | No         | Yes        | No         | No         | Yes        | No
Exadata X2-2, X2-8, Expansion Rack | No         | No         | No         | Yes        | No         | No         | No

To monitor ESM status, we have a couple of options:
•  Using ILOM: ILOM tracks the lifespan of the F20 cards and notifies you when an ESM has to be replaced.
•  Using the Sun Flash Accelerator F20 ESM Monitoring Utility, a script which needs to be installed on the storage servers.

To verify the ESM lifetime value, use the following command on the storage servers:

for RISER in RISER1/PCIE1 RISER1/PCIE4 RISER2/PCIE2 RISER2/PCIE5; do ipmitool sunoem cli "show /SYS/MB/$RISER/F20CARD/UPTIME"; done | grep value -A4
 
If the "value" reported exceeds the "upper_noncritical_threshold" reported, schedule a replacement of the relevant ESM.

To replace ESMs, we have two methods:

Rolling replacement: components are replaced by taking one server offline at a time while the rest of the system stays up.

Full system downtime: the complete system is shut down and the consumable components are replaced simultaneously.

Since we had to replace the ESMs on 10 storage servers, which requires a lot of maintenance time and downtime, we planned this over a weekend in a rolling fashion. Replacing the ESMs on the V2 system took much more time than on the X2 because of how the ESMs are physically connected inside the server. On the X2 system the activity took at most 30 minutes per server, including powering the server off and on; the per-cell sequence we followed is sketched below.
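A sketch of that per-cell sequence, using the standard CellCLI rolling-maintenance commands (always confirm the exact steps against the current Oracle procedure for your image version):
# on the cell to be serviced: confirm ASM can tolerate taking its disks offline
cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome
# take the grid disks offline and stop the cell services, then power the cell off
cellcli -e alter griddisk all inactive
cellcli -e alter cell shutdown services all
# ...replace the ESMs and power the cell back on...
# bring the grid disks back online and wait for resync before moving to the next cell
cellcli -e alter griddisk all active
cellcli -e list griddisk attributes name,asmmodestatus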

How the ESM is placed inside the server (V2):

[Image: ESM placement inside a V2 storage server]

However, below is the estimated maintenance window timeline given by Oracle, which may vary from system to system:
Specification | Full System Downtime | Rolling Method
Quarter Rack  | 2 - 2.5 hours        | 4 hours
Half Rack     | 2.5 - 4 hours        | 10 hours
Full Rack     | 5 - 8 hours          | 20 hours

Verification after replacement:
Once the ESMs are replaced successfully, we need to make sure that all the flash disks are visible to the server.

To verify, run the command below; it should show all the flash disks in the normal state:
CellCLI> list lun where disktype=flashdisk
         1_0     1_0     normal
         1_1     1_1     normal
         1_2     1_2     normal
         1_3     1_3     normal
         2_0     2_0     normal
         2_1     2_1     normal
         2_2     2_2     normal
         2_3     2_3     normal
         4_0     4_0     normal
         4_1     4_1     normal
         4_2     4_2     normal
         4_3     4_3     normal
         5_0     5_0     normal
         5_1     5_1     normal
         5_2     5_2     normal
         5_3     5_3     normal

Or check with the command below:
lsscsi |grep -i marvel

[root@ex01ecel02 sys]# lsscsi |grep -i marvel
[8:0:0:0]    disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdn
[8:0:1:0]    disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdo
[8:0:2:0]    disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdp
[8:0:3:0]    disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdq
[9:0:0:0]    disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdr
[9:0:1:0]    disk    ATA      MARVELL SD88SA02 D20Y  /dev/sds
[9:0:2:0]    disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdt
[9:0:3:0]    disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdu
[10:0:0:0]   disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdv
[10:0:1:0]   disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdw
[10:0:2:0]   disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdx
[10:0:3:0]   disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdy
[11:0:0:0]   disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdz
[11:0:1:0]   disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdaa
[11:0:2:0]   disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdab
[11:0:3:0]   disk    ATA      MARVELL SD88SA02 D20Y  /dev/sdac
The above command should show 16 flash disks available.
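Optionally, the flash cache and flash disks can also be checked from CellCLI once the cell is back up; a minimal sketch using standard commands (output varies by configuration):
CellCLI> list flashcache detail
CellCLI> list physicaldisk where disktype=flashdisk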

Hope this helps you get a clear understanding of ESM replacement on Exadata. :)

In case of further questions, kindly shoot me a mail at mail2saurav.gupt@gmail.com.

I will post about Battery Controller Replacement on Exadata very soon. :)

Regards,
Saurabh

Wednesday, October 24, 2012

Exadata - version compare sheet

Just comparing the hardware and software specifications of three Exadata versions:

EXADATA - VERSION COMPARE SHEET

Components                  | V2 Full Rack                                   | X2-2 Full Rack                                                                                        | X3-2 Full Rack
Database Servers            | 8 x Sun Fire X4170 1U                          | 8 x Sun Fire X4170 M2 1U                                                                              | 8 x Sun Fire
Database CPUs               | Xeon E5540 quad-core 2.53 GHz                  | Xeon X5670 six-core 2.93 GHz                                                                          | Xeon E5-2690 2.9 GHz
Database Cores              | 64                                             | 96                                                                                                    | 128
Database RAM                | 576 GB (72 GB each)                            | 768 GB (96 GB each)                                                                                   | 1 TB (128 GB each, expandable to 256 GB each)
Storage Cells               | 14 x Sun Fire X4275                            | 14 x Sun Fire X4270 M2                                                                                | 14
Storage Cell CPUs           | Xeon E5540 quad-core 2.53 GHz                  | Xeon L5640 six-core 2.26 GHz                                                                          | Xeon E5-2630L six-core 2.0 GHz
Storage Cell CPU Cores      | 112                                            | 168                                                                                                   | 168
IO Performance & Capacity   | 600 GB 15K RPM SAS or 2 TB 7.2K RPM SATA disks | 600 GB 15K RPM SAS (HP model, high performance) or 2 TB / 3 TB 7.2K RPM SAS disks (HC model, high capacity)* | 10,000 RPM disks
Flash Cache                 | 5.3 TB                                         | 5.3 TB                                                                                                | 22.4 TB
InfiniBand Switches         | QDR 40 Gbit/s                                  | QDR 40 Gbit/s                                                                                         | QDR 40 Gbit/s
Database Server OS          | Oracle Linux only                              | Oracle Linux (possibly Solaris later, still unclear)                                                  | Oracle Linux / Solaris
Maximum Data Load Rate      | 12 TB/hour                                     | 12 TB/hour                                                                                            | 16 TB/hour
Disk Data Capacity (Raw)    | 100 TB (HP disks) or 504 TB (HC disks)         | 100 TB (HP disks) or 504 TB (HC disks)                                                                | 100 TB (HP disks) or 504 TB (HC disks)
Disk Data Capacity (Usable) | 45 TB (HP disks) or 224 TB (HC disks)          | 45 TB (HP disks) or 224 TB (HC disks)                                                                 | 45 TB (HP disks) or 224 TB (HC disks)
Maximum Disk Bandwidth      | 25 GB/s (HP disks) or 18 GB/s (HC disks)       | 25 GB/s (HP disks) or 18 GB/s (HC disks)                                                              | 25 GB/s (HP disks) or 18 GB/s (HC disks)

* The 2 TB / 3 TB SAS drives are the same 2 or 3 TB drives as before, with new SAS electronics.





Regards
Saurabh