Saturday, October 20, 2018

OrbitDB: Decentralized Database on IPFS

OrbitDB is a serverless, distributed, peer-to-peer database. OrbitDB uses IPFS as its data storage and IPFS Pubsub to automatically sync databases with peers. Database metadata is stored in the local data directory orbitdb; the data blocks saved in IPFS live under orbitdb/ipfs.
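Conceptually, each OrbitDB database is an append-only operation log (ipfs-log) that peers merge without conflicts. A minimal sketch of that merge idea, using made-up entry objects (not OrbitDB's actual API):

```javascript
// Sketch: merging two append-only op-logs deterministically, the CRDT idea
// behind OrbitDB's replication. The entry shape and hashes are made up.
function mergeLogs(a, b) {
  const seen = new Set()
  return [...a, ...b]
    // order by logical clock, break ties by entry hash
    .sort((x, y) => x.clock - y.clock || (x.hash < y.hash ? -1 : 1))
    // drop duplicate entries already present in both logs
    .filter(e => !seen.has(e.hash) && seen.add(e.hash))
}

const peerA = [
  { hash: 'Qma1', clock: 1, op: 'PUT', key: 'name', value: 'hello' },
  { hash: 'Qma2', clock: 2, op: 'PUT', key: 'name', value: 'world' }
]
const peerB = [
  { hash: 'Qma1', clock: 1, op: 'PUT', key: 'name', value: 'hello' },
  { hash: 'Qmb1', clock: 2, op: 'DEL', key: 'age' }
]

// Both peers converge to the same log regardless of merge order
console.log(mergeLogs(peerA, peerB).map(e => e.hash))
```

Because the merged order depends only on the entries themselves, every peer that has seen the same operations ends up with the same database state.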

Download Node.js in /tmp from https://nodejs.org/dist/v8.12.0/node-v8.12.0-linux-x64.tar.xz

$ tar xvf /tmp/node-v8.12.0-linux-x64.tar.xz
$ ln -s node-v8.12.0-linux-x64 node

Add ~/node/bin to PATH in .bashrc

$ npm install -g ipfs

$ jsipfs init
initializing ipfs node at $HOME/.jsipfs
generating 2048-bit RSA keypair...done
peer identity: QmSANGMer1x8YGHYtoer8kStRXuWCj1DN1F28zKeW3JwwR
to get started, enter:

jsipfs cat /ipfs/QmfGBRT6BbWJd7yUc2uYdaUZJBbnEFvTqehPFoSMQ6wgdr/readme

$ jsipfs daemon
Initializing daemon...
Swarm listening on /ip4/127.0.0.1/tcp/4003/ws/ipfs/QmSANGMer1x8YGHYtoer8kStRXuWCj1DN1F28zKeW3JwwR
Swarm listening on /ip4/127.0.0.1/tcp/4002/ipfs/QmSANGMer1x8YGHYtoer8kStRXuWCj1DN1F28zKeW3JwwR
Swarm listening on /ip4/192.168.1.15/tcp/4002/ipfs/QmSANGMer1x8YGHYtoer8kStRXuWCj1DN1F28zKeW3JwwR
API is listening on: /ip4/127.0.0.1/tcp/5002
Gateway (readonly) is listening on: /ip4/127.0.0.1/tcp/9090
Daemon is ready

Web Console URL   : http://127.0.0.1:5002/webui
Local Gateway URL : http://127.0.0.1:9090/ipfs/

$ ls mysite
img index.html
$ jsipfs add -r mysite
added QmcMN2wqoun88SVF5own7D5LUpnHwDA6ALZnVdFXhnYhAs mysite/img/spacecat.jpg
added QmS8tC5NJqajBB5qFhcA1auav14iHMnoMZJWfmr4k3EY6w mysite/img
added QmYh6HbZhHABQXrkQZ4aRRSoSa6bb9vaKoHeumWex6HRsT mysite/index.html
added QmYeAiiK1UfB8MGLRefok1N7vBTyX8hGPuMXZ4Xq1DPyt7 mysite/

$ npm install -g orbit-db-cli

Create database. Type can be one of eventlog|feed|docstore|keyvalue|counter
$ orbitdb create hello feed
/orbitdb/QmeBA9kqJkYb83xDttsZbrxjbFejj3pjdF9j52eSGCVxYW/hello

Add an entry to database
$ orbitdb add /orbitdb/QmeBA9kqJkYb83xDttsZbrxjbFejj3pjdF9j52eSGCVxYW/hello "world"
Added QmZze55TaD55uu4TqTdkvRpwyXubNXXZRkqfqF9RbHbupP

Query the database
$ orbitdb get /orbitdb/QmeBA9kqJkYb83xDttsZbrxjbFejj3pjdF9j52eSGCVxYW/hello
"world"

Delete an entry from a database.
$ orbitdb del /orbitdb/QmeBA9kqJkYb83xDttsZbrxjbFejj3pjdF9j52eSGCVxYW/hello QmZze55TaD55uu4TqTdkvRpwyXubNXXZRkqfqF9RbHbupP
Deleted QmZze55TaD55uu4TqTdkvRpwyXubNXXZRkqfqF9RbHbupP

$ orbitdb get /orbitdb/QmeBA9kqJkYb83xDttsZbrxjbFejj3pjdF9j52eSGCVxYW/hello
Database '/orbitdb/QmeBA9kqJkYb83xDttsZbrxjbFejj3pjdF9j52eSGCVxYW/hello' is empty!

Show information about a database
$ orbitdb info /orbitdb/QmeBA9kqJkYb83xDttsZbrxjbFejj3pjdF9j52eSGCVxYW/hello
/orbitdb/QmeBA9kqJkYb83xDttsZbrxjbFejj3pjdF9j52eSGCVxYW/hello
> Type: feed
> Owner: /orbitdb/QmeBA9kqJkYb83xDttsZbrxjbFejj3pjdF9j52eSGCVxYW/hello
> Data file: ./orbitdb/QmeBA9kqJkYb83xDttsZbrxjbFejj3pjdF9j52eSGCVxYW/hello
> Entries: 2
> Oplog length: 2 / 2
> Write-access:
> 04ea354bb28cfaada7c61d926103face8d357f6d829c80737e19a0423a75147db224ae4a19e2ca35abd09818af403b300fe0a31003534fc8f1800a8ed506c33a2e

Remove a database locally.
$ orbitdb drop /orbitdb/QmeBA9kqJkYb83xDttsZbrxjbFejj3pjdF9j52eSGCVxYW/hello yes
Dropped database '/orbitdb/QmeBA9kqJkYb83xDttsZbrxjbFejj3pjdF9j52eSGCVxYW/hello'
------------------------------

$ orbitdb create demo docstore
/orbitdb/QmQzVvyVB2ZYTApek7VaejWA5gmt51S8gwdk1XnwYdN4Tq/demo

$ orbitdb put /orbitdb/QmQzVvyVB2ZYTApek7VaejWA5gmt51S8gwdk1XnwYdN4Tq/demo "{\"_id\":1,\"name\":\"FRANK\"}" --indexBy name
Added document 'FRANK'

$ orbitdb get /orbitdb/QmQzVvyVB2ZYTApek7VaejWA5gmt51S8gwdk1XnwYdN4Tq/demo "FRANK"
Searching for 'FRANK' from '/orbitdb/QmQzVvyVB2ZYTApek7VaejWA5gmt51S8gwdk1XnwYdN4Tq/demo'
┌───────┬─────┐
│ name  │ _id │
├───────┼─────┤
│ FRANK │ 1   │
└───────┴─────┘
Found 1 matches (1 ms)

$ orbitdb replicate /orbitdb/QmQzVvyVB2ZYTApek7VaejWA5gmt51S8gwdk1XnwYdN4Tq/demo --progress
Swarm listening on /ip4/127.0.0.1/tcp/41771/ipfs/QmPyG6LGz8E6qHbq5XiafdmJo584NEKiitXfKqcTo53DV3
Swarm listening on /ip4/192.168.1.15/tcp/41771/ipfs/QmPyG6LGz8E6qHbq5XiafdmJo584NEKiitXfKqcTo53DV3
Loading '/orbitdb/QmQzVvyVB2ZYTApek7VaejWA5gmt51S8gwdk1XnwYdN4Tq/demo' (docstore)
Loading '/orbitdb/QmQzVvyVB2ZYTApek7VaejWA5gmt51S8gwdk1XnwYdN4Tq/demo' ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0/1 |   0.0% | 00:00:00
████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ 1/1 | 100.0% | 00:00:00
Replicating '/orbitdb/QmQzVvyVB2ZYTApek7VaejWA5gmt51S8gwdk1XnwYdN4Tq/demo' 
████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ 1/1 | 100.0% | 00:00:00

$ orbitdb del /orbitdb/QmQzVvyVB2ZYTApek7VaejWA5gmt51S8gwdk1XnwYdN4Tq/demo "FRANK"
Deleted FRANK

$ orbitdb drop /orbitdb/QmQzVvyVB2ZYTApek7VaejWA5gmt51S8gwdk1XnwYdN4Tq/demo yes
Dropped database '/orbitdb/QmQzVvyVB2ZYTApek7VaejWA5gmt51S8gwdk1XnwYdN4Tq/demo'
--------------------

$ orbitdb create demo keyvalue
/orbitdb/QmSjz8cQRGHBX8vC6SRt4pMnxfUJzW6rYJhgpPd2kaHBDT/demo

$ orbitdb set /orbitdb/QmSjz8cQRGHBX8vC6SRt4pMnxfUJzW6rYJhgpPd2kaHBDT/demo "volume" 100
'volume' set to '100' (QmcnimRojRMgzKLZkHnHDnQ6G5hf1JW9mLdK6XW5WzUfqm)

$ orbitdb get /orbitdb/QmSjz8cQRGHBX8vC6SRt4pMnxfUJzW6rYJhgpPd2kaHBDT/demo volume
100

$ orbitdb del /orbitdb/QmSjz8cQRGHBX8vC6SRt4pMnxfUJzW6rYJhgpPd2kaHBDT/demo volume
Deleted volume

$ orbitdb drop /orbitdb/QmSjz8cQRGHBX8vC6SRt4pMnxfUJzW6rYJhgpPd2kaHBDT/demo yes
Dropped database '/orbitdb/QmSjz8cQRGHBX8vC6SRt4pMnxfUJzW6rYJhgpPd2kaHBDT/demo'
---------------------

$ npm install -g orbit-db

$ cat keyvalue.js

'use strict'

const IPFS = require('./node/lib/node_modules/ipfs')
const OrbitDB = require('./node/lib/node_modules/orbit-db')

// OrbitDB uses Pubsub, which is an experimental IPFS feature
// and needs to be turned on manually.

// Create IPFS instance
const ipfs = new IPFS({
  EXPERIMENTAL: {
    pubsub: true
  }
})

ipfs.on('error', (err) => console.error(err))

ipfs.on('ready', async () => {
  try {
    // Create OrbitDB instance
    const orbitdb = new OrbitDB(ipfs)

    // Create/open a key-value database
    const db = await orbitdb.kvstore('first-database')
    // Load the database locally before using it
    await db.load()
    // Print the database address
    console.log(db.address.toString())
    // /orbitdb/Qmd8TmZrWASypEp4Er9tgWP4kCNQnW4ncSnvjvyHQ3EVSU/first-database
    // Add an entry
    await db.put('name', 'hello')
    // Get an entry by key
    console.log(db.get('name'))
    // Query all entries
    console.log(db.all)
    // Delete an entry by key
    await db.del('name')
    // Remove the database locally
    await db.drop()
  } catch (e) {
    console.error(e)
    process.exit(1)
  }
})

$ node keyvalue.js



Sunday, October 14, 2018

Hosting Website on IPFS

The InterPlanetary File System (IPFS) is an open-source, peer-to-peer distributed hypermedia protocol that acts as a sort of combination of Kademlia, BitTorrent, and Git to create a distributed subsystem of the Internet. Objects in IPFS are content-addressed.

Install IPFS.

$ cd /tmp
$ wget https://dist.ipfs.io/go-ipfs/v0.4.17/go-ipfs_v0.4.17_linux-amd64.tar.gz
$ tar zxvf /tmp/go-ipfs_v0.4.17_linux-amd64.tar.gz

Add ~/go-ipfs to PATH in .bash_profile

Initialize the repository. IPFS stores all its settings and internal data in the directory $HOME/.ipfs

$ ipfs init

initializing IPFS node at $HOME/.ipfs
generating 2048-bit RSA keypair...done
peer identity: Qmcpo2iLBikrdf1d6QU6vXuNb6P7hwrbNPW9kLAH8eG67z
to get started, enter:

ipfs cat /ipfs/QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG/readme

$ ipfs cat /ipfs/QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG/readme

Run the IPFS node

$ ipfs daemon

API server listening on /ip4/127.0.0.1/tcp/5001
Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/8080
Daemon is ready

List peers with open connections

$ ipfs swarm peers

Web Console URL   : http://127.0.0.1:5001/webui
Local Gateway URL : http://127.0.0.1:8080/ipfs

Add website content to IPFS. (Pass --cid-version 1 to generate CIDv1 identifiers.)

$ ls mysite
img index.html
$ ipfs add -r mysite
added QmcMN2wqoun88SVF5own7D5LUpnHwDA6ALZnVdFXhnYhAs mysite/img/spacecat.jpg
added QmS8tC5NJqajBB5qFhcA1auav14iHMnoMZJWfmr4k3EY6w mysite/img
added QmYh6HbZhHABQXrkQZ4aRRSoSa6bb9vaKoHeumWex6HRsT mysite/index.html
added QmYeAiiK1UfB8MGLRefok1N7vBTyX8hGPuMXZ4Xq1DPyt7 mysite/

The hash on the last line is the root CID of the website.

Visit your site on the local gateway at http://127.0.0.1:8080/ipfs/<rootCID>
You can also reach it through a public gateway by opening https://gateway.ipfs.io/ipfs/<rootCID> or https://ipfs.infura.io/ipfs/<rootCID>
When you change the content, add it to IPFS again; this produces a new root CID. To serve the newest version of the site at a stable address, publish it to IPNS. The InterPlanetary Name System (IPNS) is a system for creating and updating mutable links to IPFS content.

Publish to IPNS

$ ipfs name publish <NEW_rootCID>
Published to $PEER_ID: /ipfs/<NEW_rootCID>
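The mutable-pointer idea behind IPNS can be sketched in a few lines: a stable name derived from your peer key points at whichever immutable CID you last published. (The peer ID and CIDs below are made-up placeholders.)

```javascript
// Toy IPNS: a mutable name -> immutable CID pointer.
// The peer ID and CIDs are made-up placeholders.
const ipns = new Map()

function publish(peerId, cid) {
  ipns.set(`/ipns/${peerId}`, `/ipfs/${cid}`)
}

function resolve(name) {
  return ipns.get(name)
}

publish('QmPeer123', 'QmSiteV1')
console.log(resolve('/ipns/QmPeer123')) // /ipfs/QmSiteV1

// Re-adding changed content yields a new root CID; republish to update
publish('QmPeer123', 'QmSiteV2')
console.log(resolve('/ipns/QmPeer123')) // /ipfs/QmSiteV2
```

The /ipns/ name never changes, so it is the address you hand out, while the /ipfs/ target changes with every publish.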

To replace hashes with a readable name (DNSLink), add a DNS TXT record at _dnslink.your.domain with the value dnslink=/ipns/$PEER_ID
Now you should be able to view your site at http://127.0.0.1:8080/ipns/your.domain or at http://gateway.ipfs.io/ipns/your.domain
Add a DNS CNAME record pointing your.domain to gateway.ipfs.io so that you can visit your site at http://your.domain

Add content with CID v1. The rootCID will be pinned recursively by default.
$ ipfs add -r --cid-version 1 mysite

List all objects pinned in the local storage.
$ ipfs pin ls --type=all

List only rootCIDs. 
$ ipfs pin ls --type=recursive

Unpin the target folder.
$ ipfs pin rm <rootCID>

Perform garbage collection on the repository
$ ipfs repo gc

Although the root CID was removed from local storage, copies of the data may still exist on other IPFS nodes. Unpinned content is eventually garbage-collected across the network, so if you want your data to stay on the network permanently, pin it on a node you control or use a pinning service.

Remote Pinning Service

Because a local IPFS node is not always online and routinely garbage-collects unpinned data, a remote pinning service is helpful when content needs to stay permanently available.

Create a free account at https://nft.storage/ and generate an API token.

Add remote service
$ ipfs pin remote service add nft_storage https://nft.storage/api NFT_KEY

List remote service
$ ipfs pin remote service ls
nft_storage https://nft.storage/api

Remove remote service. 
$ ipfs pin remote service rm nft_storage

Pin the content remotely. In the WebUI: Files > enter the CID > Browse > More > Set pinning > check nft_storage > Apply.
$ ipfs pin remote add <root_CID> --service nft_storage

Unpin CID remotely
$ ipfs pin remote rm --cid <cid> --service nft_storage

Check pin status. 
https://api.nft.storage/check/<cid>
Pin Status column at https://nft.storage/files/

Sunday, June 10, 2018

Installing Vault for Streamsets


Download the Vault zip file from the Hashicorp Vault website:  https://www.vaultproject.io/downloads.html

Create vault user.
$ sudo useradd -r -g daemon -d /opt/vault -m -s /sbin/nologin -c "Vault user" vault
$ cd /tmp
$ wget https://releases.hashicorp.com/vault/0.10.1/vault_0.10.1_linux_amd64.zip
$ sudo mkdir -p /opt/vault/bin
$ cd /opt/vault/bin
$ sudo unzip /tmp/vault_0.10.1_linux_amd64.zip
$ sudo ln -s /opt/vault/bin/vault /usr/local/bin/vault
$ sudo chown -R vault:root /opt/vault
$ sudo chmod -R 755 /opt/vault
$ sudo mkdir /opt/vault/conf

Change to vault user
$ sudo su - vault -s /bin/bash

$ cd /opt/vault/conf
Create configuration file vault-conf.hcl
ui = true

listener "tcp" {
  address = "0.0.0.0:8200"
  tls_disable = 1
}

storage "s3" {
    bucket = "vault"
    region = "us-east-1"
}

disable_mlock=true

Start Vault
$ nohup vault server -config=/opt/vault/conf/vault-conf.hcl > /var/log/vault/vault-debug.log 2>&1 &
$ jobs
$ export VAULT_ADDR=http://127.0.0.1:8200
Verify the server is running
$ vault status
Initialize Vault
$ vault operator init
Copy keys to /opt/vault/keys/vault_keys.txt
This outputs five unseal keys, which are needed to unseal the Vault; by default, any three of the five are required (Shamir's secret sharing with a 3-of-5 threshold). The output also includes the Initial Root Token. Use it only for initial configuration; recurring operations should be done with policy-constrained tokens.
Unseal Vault with three different keys.
$ vault operator unseal
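The 3-of-5 unseal scheme is Shamir's secret sharing: the master key becomes the constant term of a random degree-2 polynomial, shares are points on it, and any three points reconstruct the constant while two reveal nothing. A small illustration over a prime field (this is not Vault's actual implementation):

```javascript
// Shamir's secret sharing sketch: k-of-n threshold over a prime field.
// Illustrative only -- Vault's real implementation differs.
const P = 2305843009213693951n // the Mersenne prime 2^61 - 1

const mod = (a) => ((a % P) + P) % P
const modpow = (b, e) => {
  let r = 1n
  for (b = mod(b); e > 0n; e >>= 1n, b = mod(b * b)) if (e & 1n) r = mod(r * b)
  return r
}
const inv = (a) => modpow(a, P - 2n) // Fermat's little theorem

// Split: evaluate a random degree-(k-1) polynomial with f(0) = secret
function split(secret, n, k) {
  const coeffs = [secret]
  for (let i = 1; i < k; i++) coeffs.push(BigInt(1e9 * Math.random() | 0))
  return Array.from({ length: n }, (_, j) => {
    const x = BigInt(j + 1)
    let y = 0n
    for (let i = coeffs.length - 1; i >= 0; i--) y = mod(y * x + coeffs[i])
    return [x, y]
  })
}

// Reconstruct: Lagrange interpolation at x = 0
function combine(shares) {
  let secret = 0n
  for (const [xj, yj] of shares) {
    let num = 1n, den = 1n
    for (const [xm] of shares) {
      if (xm === xj) continue
      num = mod(num * xm)
      den = mod(den * (xm - xj))
    }
    secret = mod(secret + yj * num * inv(den))
  }
  return secret
}

const shares = split(123456789n, 5, 3)
console.log(combine(shares.slice(0, 3))) // any 3 shares recover the secret
console.log(combine(shares.slice(2, 5)))
```

This is why the keys should be distributed to different operators: no single person (or pair) can unseal the Vault alone.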

Authenticate with Vault using the Initial Root Token to perform operations on the unsealed Vault.
$ vault login

Enable Audit Logging
$ vault audit enable file file_path=/var/log/vault/vault-audit.log

List enabled auth methods
$ vault auth list
Path      Type     Description
----      ----     -----------
token/    token    token based credentials

Integrate Active Directory

Enable LDAP auth method.
$ vault auth enable ldap
Success! Enabled ldap auth method at: ldap/

$ vault write auth/ldap/config \
    binddn="cn=Lkup_user,ou=Standard,ou=Service Accounts,ou=Accounts,ou=abc,ou=External,ou=xyz,dc=example,dc=com" \
    bindpass='passwd' \
    deny_null_bind=true \
    discoverdn=false \
    groupattr="cn" \
    groupdn="OU=Standard,OU=Groups,OU=Prod,OU=abc,OU=External,OU=xyz,DC=example,DC=com" \
    groupfilter="(&(objectClass=group)(member:1.2.840.113556.1.4.1941:={{.UserDN}}))" \
    insecure_tls=true \
    starttls=false \
    url="ldap://ldap.example.com" \
    userattr="samaccountname" \
    userdn="OU=Users,OU=Accounts,OU=xyz,DC=example,DC=com"

$ cat admin-policy.hcl
# List existing auth methods
path "sys/auth"
{
  capabilities = ["read"]
}

# Manage auth methods broadly across Vault
path "sys/auth/*"
{
  capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}

# List existing policies
path "sys/policy"
{
  capabilities = ["read"]
}

path "sys/policies"
{
  capabilities = ["read"]
}

# Create and manage ACL policies broadly across Vault
path "sys/policy/*"
{
  capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}

path "sys/policies/*"
{
  capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}

# List existing KV secrets v2
path "secret"
{
  capabilities = ["read"]
}

# Create and manage KV secrets engine v2 broadly across Vault.
path "secret/*"
{
  capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}

# List existing databases
path "database"
{
  capabilities = ["read"]
}

# Create and manage databases broadly across Vault.
path "database/*"
{
  capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}

# List existing identities
path "identity"
{
  capabilities = ["read"]
}

# Create and manage identities broadly across Vault.
path "identity/*"
{
  capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}

# Read health checks
path "sys/health"
{
  capabilities = ["read", "sudo"]
}

# To perform capabilities of token
path "sys/capabilities"
{
  capabilities = ["create", "update"]
}

path "sys/capabilities-self"
{
  capabilities = ["create", "update"]
}

Create admin policy.
$ vault policy write admin admin-policy.hcl
Map LDAP group to policies. For example, map vault-admin group to admin policy.
$ vault write auth/ldap/groups/vault-admin policies=admin
-----------------------------------------------------------------
AD Group                       Vault Path
-----------------------------------------------------------------
vault-admin                    secret/*
vault-admin-<env>-mysql        secret/<env>/MySQL/<key-name>
vault-admin-<env>-postgres     secret/<env>/PostgreSQL/<key-name>
vault-admin-<env>-oracle       secret/<env>/Oracle/<key-name>
vault-admin-<env>-hadoop       secret/<env>/Hadoop/<key-name>
vault-read-<env>-mysql         secret/<env>/MySQL/<key-name>
vault-read-<env>-postgres      secret/<env>/PostgreSQL/<key-name>
vault-read-<env>-oracle        secret/<env>/Oracle/<key-name>
vault-read-<env>-hadoop        secret/<env>/Hadoop/<key-name>

<env>      : prod/impl/test/dev
<key-name> : oracle_instance, mysql_name, etc.

List enabled secret backends
$ vault secrets list
Path          Type         Description
----          ----         -----------
cubbyhole/    cubbyhole    per-token private secret storage
identity/     identity     identity store
secret/       kv           key/value secret storage
sys/          system       system endpoints used for control, policy and debugging

Disable KV secrets engine – version 1
$ vault secrets disable secret
Enable KV secrets engine - version 2
$ vault secrets enable -path=secret -description="key/value secret storage v2" -version=2 kv
List existing keys
$ vault kv list secret
$ vault kv put secret/prod/Oracle/<instance_name> username=<user> password=<passwd>
$ curl -s -H "X-Vault-Token: $VAULT_TOKEN" -X GET $VAULT_ADDR/v1/secret/data/prod/Oracle/<instance_name> | jq '.data.data.username,.data.data.password'
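The practical difference between KV v1 and v2 is that v2 keeps a version history per key instead of overwriting in place, which is why the REST path gains a /data/ segment and responses nest under .data.data. A toy sketch of that versioning behavior (not Vault's storage format):

```javascript
// Toy versioned key-value store, illustrating the KV v2 idea:
// each put appends a new version instead of overwriting.
const kv = new Map()

function put(path, data) {
  if (!kv.has(path)) kv.set(path, [])
  const versions = kv.get(path)
  versions.push(data)
  return versions.length // version number, starting at 1
}

function get(path, version) {
  const versions = kv.get(path) || []
  // no version requested -> latest; otherwise that specific version
  return version ? versions[version - 1] : versions[versions.length - 1]
}

put('secret/prod/Oracle/db1', { username: 'app', password: 'old' })
put('secret/prod/Oracle/db1', { username: 'app', password: 'new' })

console.log(get('secret/prod/Oracle/db1').password)    // -> new
console.log(get('secret/prod/Oracle/db1', 1).password) // -> old
```

A rotated password therefore doesn't destroy the previous one; older versions remain readable until explicitly destroyed.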

Configure AppRole for Streamsets

Enable AppRole auth method.
$ vault auth enable approle
Success! Enabled approle auth method at: approle/

$ vault auth list
Path        Type       Description
----        ----       -----------
approle/    approle    n/a
token/      token      token based credentials

Create a policy.
$ tee sdc-approle-pol.hcl <<EOF
# Login with AppRole
path "auth/approle/login" {
  capabilities = [ "create", "read" ]
}

# Mount the AppRole auth method
path "sys/auth/approle" {
  capabilities = [ "create", "read", "update", "delete", "sudo" ]
}

# Configure the AppRole auth method
path "sys/auth/approle/*" {
  capabilities = [ "create", "read", "update", "delete" ]
}

# Create and manage roles
path "auth/approle/*" {
  capabilities = [ "create", "read", "update", "delete", "list" ]
}

# Write ACL policies
path "sys/policy/*" {
  capabilities = [ "create", "read", "update", "delete", "list" ]
}

# Write test data
path "secret/test/MySQL/app-sdc/*" {
  capabilities = [ "create", "read", "update", "delete", "list" ]
}
EOF

$ vault policy write sdc-approle-pol sdc-approle-pol.hcl

$ vault policy list
admin
default
sdc-approle-pol
root

Create a role with policy attached.
$ vault write auth/approle/role/app-sdc policies="sdc-approle-pol" secret_id_ttl=120m token_ttl="60m" token_max_ttl="120m"

Read the role.
$ vault read auth/approle/role/app-sdc
Fetch RoleID for app-sdc role.
$ vault read auth/approle/role/app-sdc/role-id
Generate a new SecretID for app-sdc role.
$ vault write -f auth/approle/role/app-sdc/secret-id

$ vault read auth/approle/role/app-sdc/role-id
Key        Value
---        -----
role_id    36015ef7-875a-3765-4e74-e6b1ccdc5d3b
$ vault write -f auth/approle/role/app-sdc/secret-id
Key                   Value
---                   -----
secret_id             84d34edc-cb7f-0eb8-dc59-f91c47e9e9fe
secret_id_accessor    a8f01a19-ec71-f70c-c564-0457e7a6213a

Authenticate with the Role ID and Secret ID to receive an AppRole token.
$ vault write auth/approle/login role_id="<role-id>" secret_id="<secret-id>"

Once authentication succeeds, Vault returns a token that the application can use to request secrets.
Streamsets can authenticate with Vault using the Role ID from its properties file and the Secret ID read from a file:
Update credentialStore.vault.config.role.id= in /etc/sdc/credential-stores.properties
Create /etc/sdc/vault-secret-id containing the <secret-id>

Using admin user’s token, write a Secret to the path secret/test/MySQL/app-sdc
$ vault kv put secret/test/MySQL/app-sdc sdc_mysql_user=<user> sdc_mysql_pass=<passwd>
Read secrets using the AppRole token.
$ VAULT_TOKEN=<AppRole_token> vault kv get secret/test/MySQL/app-sdc
Alternatively, you can first log in to Vault with the AppRole token.
$ vault login <AppRole_token>
$ vault kv get secret/test/MySQL/app-sdc

Launch Vault UI : http://vault_host:8200/ui

Use the following notation for credentials in streamsets pipeline to access the secrets in Vault.
${vault:read("secret/test/MySQL/app-sdc/sdc_mysql_user","value")}
${vault:read("secret/test/MySQL/app-sdc/sdc_mysql_pass","value")}

MySQL Dynamic Credentials with Vault


Enable the database secrets engine
$ vault secrets enable -description="database dynamic secret storage"  database
Configure Vault with the MySQL plugin and connection information, and list the roles allowed to use this connection. A role is a logical name that maps to the SQL statements used to generate credentials (for example, read-only access to all tables, or update access to one specific table).
$  vault write database/config/<mysql-name> \
    plugin_name=mysql-database-plugin \
    allowed_roles=<my-role> \
    connection_url="{{username}}:{{password}}@tcp(127.0.0.1:3306)/" \
    username="root" password="mysql"
Configure the allowed role to create dynamic credentials. Vault fills in the {{name}} and {{password}} template fields when generating each credential.
$ vault write database/roles/<my-role> \
    db_name=<mysql-name> \
    creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';GRANT SELECT ON *.* TO '{{name}}'@'%';" \
    default_ttl="1h" max_ttl="24h"

Create a policy. The client app needs to be able to read from the <my-role> role endpoint.
$ tee <my-role>-db-read.hcl <<EOF
path "database/creds/<my-role>" {
   capabilities = ["read"]
}
EOF
$ vault policy write <my-role>-db-read <my-role>-db-read.hcl

Create a new token with <my-role>-db-read policy
$ vault token create -policy="<my-role>-db-read" -wrap-ttl=20m

Generate a new set of credentials. They do not exist until this endpoint is read.
$ vault read database/creds/<my-role>