How to Execute Predictive Algorithms in Data Hub and Visualize Results in Bold BI

The sample scripts referenced in this article (for example, linearReg.py) were used to test the predictive algorithms.

Please follow the detailed procedure below to execute a Python script within Data Hub and visualize the results in Bold BI®.

Step 1: Select the Data Hub icon to launch the Data Hub site in a new browser tab.

Step 2: To create a new Pipeline, navigate to the left-hand panel and select the “Add Pipeline” button. Enter the project name and confirm by clicking the tick icon.

Step 3: The YAML editor will open, enabling configuration of the source and destination connectors.

Step 4: Select “PythonScript” from the left panel and choose “Add template” on the right panel to insert the sample configuration into the YAML editor.

Step 5: In your Python script, make sure a data frame object is present. Then add the following line after the data frame object to transfer it to tables via Bold ETL:

pipeline.run(yourdataframename, table_name="yourtablename")

Replace “yourdataframename” with the actual data frame name and “yourtablename” with the intended table name in your destination database.
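
For illustration, here is a minimal sketch of what such a script might look like. The training data, column names, and table name below are hypothetical, and the pipeline object is assumed to be supplied by Data Hub when the script runs, as described above:

# linearReg.py: minimal illustrative sketch; the data, column names, and table name are hypothetical
import pandas as pd
from sklearn.linear_model import LinearRegression

# Small training set: predict sales from advertising spend
history = pd.DataFrame({
    "ad_spend": [10, 20, 30, 40, 50],
    "sales": [25, 45, 61, 82, 100],
})

# Fit a simple linear regression model
model = LinearRegression()
model.fit(history[["ad_spend"]], history["sales"])

# Score new spend values and collect the predictions in a data frame
forecast = pd.DataFrame({"ad_spend": [60, 70, 80]})
forecast["predicted_sales"] = model.predict(forecast[["ad_spend"]])

# Transfer the data frame to the destination via Bold ETL;
# the "pipeline" object is assumed to be provided by Data Hub at run time
pipeline.run(forecast, table_name="sales_forecast")

In this sketch, forecast takes the place of yourdataframename and sales_forecast takes the place of yourtablename.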

Step 6: Proceed by clicking the “Upload File” button located at the upper right corner.

Step 7: Select the Python script file from your local system and click the “Upload” button.

Please ensure the Python script file has a “.py” extension.

Step 8: The file path will be automatically populated in the “filepath” textbox. Copy this path and paste it into the YAML editor.

Step 9: Click the “Save” button and select the destination database configured within the Data Store settings.

Step 10: Execution logs will be accessible in the “Logs” tab.

Step 11: The data frame will be created as a table within the destination database, and a corresponding data source will be established in Bold BI® Data Sources, named after the project in Data Hub.

Step 12: The table is retained in the destination data store. For instance, if PostgreSQL is selected as the destination, the table created in Data Hub will also be present in the PostgreSQL database.
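
To confirm that the export reached the destination, you can query the table directly. The sketch below assumes a PostgreSQL destination; the connection details and table name are placeholders, not values from your environment.

# Verify the exported table in a PostgreSQL destination; all connection details are placeholders
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    dbname="your_destination_db",
    user="your_user",
    password="your_password",
)
with conn, conn.cursor() as cur:
    # "sales_forecast" is the hypothetical table name from the Step 5 sketch
    cur.execute('SELECT COUNT(*) FROM "sales_forecast"')
    print("Rows exported:", cur.fetchone()[0])
conn.close()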

Reference:
https://help.boldbi.com/working-with-data-sources/working-with-bold-data-hub/

To execute the linearReg.py sample, install the required packages (pandas, numpy, and scikit-learn) by running the following command:

pip install pandas numpy scikit-learn

For Windows users, run this command in a Command Prompt opened at the directory C:\BoldServices\Python311.

Additional predictive algorithm scripts are also available.
