
I'm currently setting up a Postgres instance on my Kubernetes cluster hosted on OVH Public Cloud. The problem is that I can't access it. I know that I have to run psql -h host -U user --password -p 30904 db to connect, but I don't know what to put as host. localhost? The IP of the master node? Another IP?

Thank you for your time.

postgres-deploy.yaml:

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: NodePort
  ports:
  - port: 5432
  selector:
    app: postgres
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - image: postgres:13.1
        name: postgres
        env:
        - name: POSTGRES_DB
          valueFrom:
            secretKeyRef:
              name: ident
              key: POSTGRES_DB
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: ident
              key: POSTGRES_USER
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: ident
              key: POSTGRES_PASSWORD
        ports:
        - containerPort: 5432
          name: postgres
        volumeMounts:
        - name: postgres-persistent-storage
          mountPath: /var/lib/postgresql/data2
      volumes:
      - name: postgres-persistent-storage
        persistentVolumeClaim:
          claimName: postgres-pv-claim
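The Deployment above pulls its credentials from a Secret named ident. That Secret is not shown in the post; a minimal sketch of what it could look like (key names from the YAML above, values are placeholders) is:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ident        # name referenced by the secretKeyRef entries above
type: Opaque
stringData:          # stringData accepts plain text; Kubernetes base64-encodes it on write
  POSTGRES_DB: mydb
  POSTGRES_USER: myuser
  POSTGRES_PASSWORD: changeme
```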
  • You need to provide more details about your environment: Kubernetes configuration, how you deployed this database, how you exposed it, whether you used a Deployment or a StatefulSet, and which kind of Service was used. It would be great if you could provide all the steps to reproduce your environment with the application. Commented Nov 27, 2020 at 12:38
  • Alright, thanks! I use a Kubernetes cluster created with OVH, with a pool of nodes running OVH's standard node OS, so I didn't do much of that part; it's pretty much managed by OVH itself. What I did is a Deployment of Postgres, with a Postgres Service of type NodePort. I'll edit my post to include the content of my files. Commented Nov 27, 2020 at 12:47

2 Answers


I have deployed your YAMLs and I was able to connect.

With -h you have to provide the IP of the node (the host machine) where the PostgreSQL pod was deployed.
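Put together, the connection command is built from the node's internal IP and the Service's NodePort. A small shell sketch (the IP, port, user, and database name here are the values from my test run below; substitute your own):

```shell
# Hypothetical values from the test case below: internal IP of the node
# hosting the pod, and the NodePort assigned to the Service
HOST="10.154.15.222"
PORT="31431"

# Assemble the psql invocation the question asks about: -h takes the node IP
CMD="psql -h ${HOST} -U postgresadmin --password -p ${PORT} postgresdb"
echo "$CMD"   # prints: psql -h 10.154.15.222 -U postgresadmin --password -p 31431 postgresdb
```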

Test Case

I have tested this on my GKE cluster.

This setup is based on your YAMLs and this tutorial, mainly for the ConfigMap and PV/PVC configuration. The credentials used were:

  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgresadmin
  POSTGRES_PASSWORD: admin123

Connection to PostgreSQL Database

  • 1 - From the PostgreSQL pod
    $ kubectl get po
    NAME                        READY   STATUS    RESTARTS   AGE
    postgres-5586dc9864-pwpsn   1/1     Running   0          37m

You have to kubectl exec into the PostgreSQL pod.

$ kubectl exec -ti postgres-5586dc9864-pwpsn -- bin/bash
root@postgres-5586dc9864-pwpsn:/#

Use the psql command to connect to the database.

root@postgres-5586dc9864-pwpsn:/# psql -p 5432 -U postgresadmin -d postgresdb
psql (13.1 (Debian 13.1-1.pgdg100+1))
Type "help" for help.

postgresdb=#
  • 2 - From the cluster

Service Details:

$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.28.0.1    <none>        443/TCP          60m
postgres     NodePort    10.28.14.2   <none>        5432:31431/TCP   45m
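In the PORT(S) column, the value is printed as servicePort:nodePort/protocol, so the NodePort here is 31431. A purely illustrative shell sketch of pulling that number out of the string:

```shell
# PORT(S) value as printed by `kubectl get svc` for the postgres Service
PORTS="5432:31431/TCP"

# Strip everything up to the colon, then everything from the slash onward
NODE_PORT="${PORTS#*:}"
NODE_PORT="${NODE_PORT%/*}"

echo "$NODE_PORT"   # prints 31431
```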

Pod Details:

$ kubectl get po -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP          NODE                                       NOMINATED NODE   READINESS GATES
postgres-5586dc9864-pwpsn   1/1     Running   0          41m   10.24.1.6   gke-cluster-1-default-pool-8baf2b67-jjjh   <none>           <none>

Details of the node where the PostgreSQL pod was deployed:

$ kubectl get node -o wide | grep gke-cluster-1-default-pool-8baf2b67-jjjh
gke-cluster-1-default-pool-8baf2b67-jjjh   Ready    <none>   56m   v1.16.15-gke.4300   10.154.15.222   35.197.210.241   Container-Optimized OS from Google   4.19.112+        docker://19.3.1

The internal IP address of the node where the PostgreSQL pod was deployed is 10.154.15.222.

Command:

$ kubectl exec -ti <podname> -- psql -h <Internal IP address of hosting node> -U postgresadmin --password -p <nodeport number from service> <database name>

Output:

$ kubectl exec -ti postgres-5586dc9864-pwpsn -- psql -h 10.154.15.222 -U postgresadmin --password -p 31431 postgresdb
Password:
psql (13.1 (Debian 13.1-1.pgdg100+1))
Type "help" for help.

postgresdb=#



You can try something like this:

kubectl exec -it $(kubectl get pods -n %YOURNAMESPACE% | grep postgres | cut -d " " -f1) -n %YOURNAMESPACE% -- bash -c "psql -U postgres -c 'SELECT current_database()'"

Or, if the pod is in the current namespace, omit the namespace flag:

kubectl exec -it $(kubectl get pods | grep postgres | cut -d " " -f1) -- bash -c "psql -U postgres -c 'SELECT current_database()'"
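The grep | cut part of that pipeline just takes the first whitespace-separated field of the matching line, which is the pod name. Simulated here on a captured line of kubectl get pods output (the pod name is only an example):

```shell
# One line of `kubectl get pods` output, captured as a string
LINE="postgres-5586dc9864-pwpsn   1/1     Running   0          37m"

# The first space-separated field is the pod name
POD_NAME="$(echo "$LINE" | cut -d ' ' -f1)"

echo "$POD_NAME"   # prints postgres-5586dc9864-pwpsn
```

Note that this relies on text matching; selecting by label (for example kubectl get pods -l app=postgres) is more robust when several pod names contain "postgres".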

