Troubleshooting Dashboard Publish and Edit Data Loss on Kubernetes Deployments

Issue Description

In Bold BI Kubernetes deployments, data configurations for data sources and widgets may be lost during publishing or dashboard editing when multiple replicas of the Bold BI DataService are running. The issue is caused by concurrent read/write operations across the replicas, which can delete the underlying configuration files.

👉 As a temporary workaround, we recommend restricting the DataService to a single replica until a permanent fix is available.


How to Restrict DataService to a Single Replica

Option 1: If You Have Enough Node Resources

You can continue to run the service on a larger node with sufficient CPU and memory.

Helm Installation Method

  1. In your values.yaml, set replicaCount, minReplicas, and maxReplicas for the bi-dataservice app to 1, and adjust cpuResourceRequests (between "1" and "2") and memoryResourceRequests (between 4Gi and 8Gi) based on your requirements and node capacity:
     - app: bi-dataservice
       replicaCount: 1
       minReplicas: 1
       maxReplicas: 1
       cpuResourceRequests: 250m # adjust between "1" and "2" as needed
       memoryResourceRequests: 750Mi  # adjust between 4Gi and 8Gi as needed
    
  2. Upgrade the Helm release:
    helm upgrade <release-name> boldbi/boldbi -n bold-services -f values.yaml
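
After the upgrade, you can confirm the restriction took effect. A quick check, assuming the deployment and HPA names used in the manual kubectl steps below (they may differ in your cluster):

```shell
# The deployment should show READY 1/1
kubectl get deployment bi-dataservice-deployment -n bold-services

# The HPA should show MINPODS 1 and MAXPODS 1
kubectl get hpa bi-dataservice-hpa -n bold-services
```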
    

Manual Update with kubectl

  1. Scale down replicas to 1:
    kubectl scale deployment bi-dataservice-deployment --replicas=1 -n bold-services
    
  2. Patch the Horizontal Pod Autoscaler (HPA) to restrict replicas:
    kubectl patch hpa bi-dataservice-hpa -n bold-services --type merge -p '{"spec":{"maxReplicas":1,"minReplicas":1}}'
    
  3. Patch CPU/Memory requests according to your requirement and node capacity:
     kubectl set resources deployment bi-dataservice-deployment -n bold-services --containers="bi-dataservice-container" --requests=cpu=250m,memory=750Mi
    
    💡 Adjust the cpu (e.g., 1, 2) and memory (e.g., 4Gi, 8Gi) values as needed based on your workload and cluster resources.
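
To sanity-check a CPU request before applying it, a small helper can convert Kubernetes millicore notation to cores. This is a minimal sketch covering only the plain and `m`-suffixed forms used above, not a full Kubernetes quantity parser:

```shell
# Convert a Kubernetes CPU quantity ("250m" or "2") to cores
cpu_to_cores() {
  case "$1" in
    *m) awk "BEGIN { print ${1%m} / 1000 }" ;;  # millicores -> cores
    *)  echo "$1" ;;                            # already in cores
  esac
}

cpu_to_cores 250m   # 0.25 cores (the default request above)
cpu_to_cores 2      # 2 cores (upper end of the recommended range)
```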

Option 2: If You Have Limited Node Resources

When your cluster nodes are small, isolate the DataService on a dedicated node to prevent resource contention.

  1. Add a taint to reserve a node for DataService:
    kubectl taint nodes <node-name> bi-dataservice=dedicated:NoSchedule
    
  2. Add a toleration to the deployment so that pods can schedule on the tainted node:
    kubectl patch deployment bi-dataservice-deployment -n bold-services --type merge -p '{"spec":{"template":{"spec":{"tolerations":[{"key":"bi-dataservice","operator":"Equal","value":"dedicated","effect":"NoSchedule"}]}}}}'
    
  3. Restrict replicas to 1:
    kubectl scale deployment bi-dataservice-deployment --replicas=1 -n bold-services
    kubectl patch hpa bi-dataservice-hpa -n bold-services --type merge -p '{"spec":{"maxReplicas":1,"minReplicas":1}}'
    
  4. Patch CPU/Memory requests according to your requirement and node capacity:
     kubectl set resources deployment bi-dataservice-deployment -n bold-services --containers="bi-dataservice-container" --requests=cpu=250m,memory=750Mi
    
    💡 Adjust the cpu (e.g., 1, 2) and memory (e.g., 4Gi, 8Gi) values as needed based on your workload and cluster resources.
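
The JSON payloads above are easy to mangle with shell quoting, so it can help to keep the patch in a variable and verify it parses before handing it to kubectl. A sketch, assuming python3 is available on the machine running kubectl:

```shell
# Toleration matching the bi-dataservice=dedicated:NoSchedule taint from step 1
PATCH='{"spec":{"template":{"spec":{"tolerations":[{"key":"bi-dataservice","operator":"Equal","value":"dedicated","effect":"NoSchedule"}]}}}}'

# Fail fast on malformed JSON before touching the cluster
echo "$PATCH" | python3 -m json.tool > /dev/null && echo "patch OK"

# Then apply it:
# kubectl patch deployment bi-dataservice-deployment -n bold-services --type merge -p "$PATCH"
```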

Note: If you upgrade through Helm, you must reapply the changes above after each Helm upgrade, since an upgrade can reset them and break the isolation.
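
To make the reapply step repeatable, the manual commands can be collected into a small script to rerun after each Helm upgrade. This is a sketch assuming the deployment, HPA, and container names shown earlier; adjust the resource values to your node capacity:

```shell
#!/usr/bin/env bash
set -euo pipefail
NS=bold-services

# Pin the DataService to a single replica
kubectl scale deployment bi-dataservice-deployment --replicas=1 -n "$NS"
kubectl patch hpa bi-dataservice-hpa -n "$NS" --type merge \
  -p '{"spec":{"maxReplicas":1,"minReplicas":1}}'

# Reapply resource requests (adjust cpu/memory as needed)
kubectl set resources deployment bi-dataservice-deployment -n "$NS" \
  --containers="bi-dataservice-container" --requests=cpu=250m,memory=750Mi
```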


Summary

  • Problem: File deletion caused by concurrent replicas of bi-dataservice-deployment.
  • Workaround: Restrict DataService to a single replica.
  • Methods:
    • Option 1: Limit replicas via Helm/HPA (if enough resources).
    • Option 2: Use taints/tolerations and replica restriction (if node size is small).

👉 Until a permanent solution is implemented, keeping the Bold BI DataService at a single replica is the safest way to avoid file corruption or deletion.

Written by Sivakumar Ravindran