Wednesday, July 20, 2016

Cloud and Clear: Cohesity Cloud Archival

Cohesity provides a way to archive your secondary storage to the cloud, or to add the cloud as a third storage tier alongside SSD and HDD.
Cloud Tier and Cloud Archival augment on-premises Cohesity storage with transparent additional capacity.
Cohesity supports Google, Amazon, and Microsoft Azure as cloud providers for cloud tiering and archival.


Cohesity supports archival of VMs, Views (Datastores), and SQL Server.
This blog will cover:
- setting up a Cloud Archival for a Windows VM
- troubleshooting tips
- reviewing the stats

Setting up an Amazon S3 cloud archival on Cohesity:
  1.  Configure an External Target
  2.  Add the External Target to an existing or a new backup policy
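
Before step 1, the S3 bucket backing the External Target has to exist on the AWS side. Here is a minimal sketch using the AWS CLI (the bucket name and region are placeholder values, not from this setup):

# Create the S3 bucket that will back the Cohesity External Target
aws s3 mb s3://cohesity-archive-demo --region us-east-1
# Confirm it is visible before registering it in the Cohesity UI
aws s3 ls | grep cohesity-archive-demo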

Cloud Archival Workflow

After the local backup completes, the snapshot is transferred to the cloud according to the configured policy.

Here is the screenshot of a successful cloud archive (#greencloud):

And ... the archival data is in!




Extra Credit:
Cloud Troubleshooting Tips (for Mac):

a)  Installing the AWS CLI
sudo easy_install pip; sudo pip install awscli
b) Configuring the AWS CLI - region, ID, and keys
aws configure
c) list S3 buckets
aws s3 ls
d) Find the job and run IDs from the Cohesity UI URL - https://172.16.2.22/protection/job/10747/run/10748/1469037004135602/protection (job 10747, run 10748)
e) List the encrypted S3 archival data for this run (grep for the run ID, 10748)
aws s3 ls s3://<bucketname> --recursive --human-readable --summarize | grep 10748
2016-07-20 11:10:29   14.7 GiB cohesity/icebox/7608250724803412/10747_10748_117673240_1469038227497648
2016-07-20 13:31:12   32.0 MiB cohesity/icebox/7608250724803412/10747_10748_117673240_1469046671061785
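
Putting steps c) through e) together: each object key embeds the job and run IDs as <jobid>_<runid>_..., so one run's archive objects can be isolated with a grep. A small sketch (BUCKET is a placeholder; RUN_ID comes from the UI URL in step d):

# List only the archive objects belonging to this job run
BUCKET=cohesity-archive-demo   # placeholder - use your External Target bucket
RUN_ID=10748                   # from the UI URL in step d
aws s3 ls "s3://${BUCKET}" --recursive --human-readable | grep "_${RUN_ID}_"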

Internal Troubleshooting Tips (for the inquisitive mind):

The Bridge container manages Icebox (cloud tiering and archival).
    a) Check the logs (increase the verbosity of the bridge logs from 0 to 2)
GFLAG                         : v, 2

[cohesity@gs1-alpha-node-2 ~]$ allssh.sh "grep curl data/logs/bridge_exec*INFO*|grep Time|tail -1"
=========== 172.16.2.11 ===========
=========== 172.16.2.13 ===========  
(Note: this is the magneto slave replicating the data to the cloud for run 10748)
bridge_exec.INFO: E0720 13:28:24.495574 15328 curl_http_rpc_executor.cc:521] Executing the curl RPC: 1612 failed with error: 28, status msg: Timeout was reached, time taken: 30001 ms

=========== 172.16.2.14 ===========
=========== 172.16.2.12 ===========

The above output tells us the curl RPC to S3 timed out for this operation - the log shows it was cut off at 30001 ms, i.e. the 30-second request timeout was hit.
Bro Tip: There is a gflag to increase the timeout if connectivity to S3 is slow (check latency with: ping <bucketname>.s3.amazonaws.com).
GFLAG NAME: bridge_s3_adapter_http_request_timeout_msecs, 180000
Note: Cloud tiering uses 8 MB block size for each operation. Cloud archival uses 32 MB block size for each operation.
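
A quick sanity check on whether a single archival op can fit inside the timeout: time a 32 MB upload, since that matches the archival block size. A rough sketch for Mac (the bucket name is a placeholder; lowercase "1m" is the BSD/Mac dd spelling):

# Generate a 32 MB test file and time its upload to S3
dd if=/dev/zero of=/tmp/probe32mb.bin bs=1m count=32
time aws s3 cp /tmp/probe32mb.bin s3://cohesity-archive-demo/probe/probe32mb.bin

If the upload takes longer than the configured timeout, the gflag above is the knob to turn.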

   b) Live traces can be reviewed at http://cohesity_node_ip:11111 (navigate to the icebox master) and look up job instance ID 10748
 
   

c) Trace (tracez) - provides information on the latency of each op
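
If you prefer the terminal, the trace page can usually be pulled with curl as well (assumption: tracez is served over plain HTTP on the same port as the status page in b; the path below is inferred from the name, not confirmed):

curl -s http://cohesity_node_ip:11111/tracez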




