Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

@ #137

Merged
merged 11 commits into from
Feb 10, 2025
103 changes: 103 additions & 0 deletions .deploy/.deployDeployChatGPTOtoroshiLLM.JSON
Original file line number Diff line number Diff line change
Expand Up @@ -212,5 +212,108 @@
"value": "15minutes"
}
]
{
"name": "integrate_models",
"description": "Generates a function that uses API key to call OpenAI models and integrates them into the FINN otoroshi GPT 5o LLM.",
"strict": true,
"parameters": {
"type": "object",
"required": [
"api_key",
"model_types",
"parameters"
],
"properties": {
"api_key": {
"type": "string",
"description": "API key for authenticating to OpenAI services"
},
"model_types": {
"type": "array",
"description": "List of model types to be integrated",
"items": {
"type": "string",
"enum": [
"davinci",
"gpt-3o-mini-high",
"gpt-3o-mini",
"5o-flagship"
]
}
},
"parameters": {
"type": "object",
"description": "Additional parameters for model integration",
"properties": {
"timeout": {
"type": "number",
"description": "Timeout duration for the API requests"
},
"version": {
"type": "string",
"description": "Version of the models to be used"
},
"models_count": {
"type": "number",
"description": "Number of models to integrate"
}
},
"additionalProperties": false,
"required": [
"timeout",
"version",
"models_count"
]
}
},
"additionalProperties": false
}
}
Comment on lines +215 to +271
Copy link

@coderabbitai coderabbitai bot Feb 10, 2025

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue

File Structure Inconsistency: Multiple Top-Level JSON Entities
Starting at line 215, there is a new JSON object defining the “integrate_models” component. However, this JSON object appears immediately after the JSON array (lines 1–214). Standard JSON does not support multiple top‑level values. Consider consolidating the configurations into one valid JSON structure or splitting them into separate files based on their purpose.

Do you prefer merging these configurations or separating them into distinct files?

🧰 Tools
🪛 Biome (1.9.4)

[error] 214-271: End of file expected

Use an array for a sequence of values: [1, 2]

(parse)

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

@coderabbitai this time around I would prefer merging and docstringing the JSON into one singular, robust JSON file please and thank you.

Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

@josefkedwards Here's the merged JSON structure with docstrings:

{
  "/**": "Configuration file for FINN Otoroshi LLM AI system.",
  "description": "This file contains environment variables and component definitions for the ChatGPT integration with Otoroshi LLM.",
  "version": "1.0.0",
  "maintainer": "Dr. J.K. Edwards",
  
  "env_vars": {
    "/**": "Environment variables for system configuration and authentication",
    "values": [
      {
        "name": "ADMIN_API_CLIENT_ID",
        "value": "d6ScnQskOCntCxt3"
      },
      {
        "name": "ADMIN_API_CLIENT_ID_FINNAI",
        "value": "FINNAIOTOROSHILLM"
      }
      // ... rest of the environment variables ...
    ]
  },
  
  "components": {
    "/**": "Component definitions for system functionality",
    "integrate_models": {
      "name": "integrate_models",
      "description": "Generates a function that uses API key to call OpenAI models and integrates them into the FINN otoroshi GPT 5o LLM.",
      "strict": true,
      "parameters": {
        "type": "object",
        "required": [
          "api_key",
          "model_types",
          "parameters"
        ],
        "properties": {
          "api_key": {
            "type": "string",
            "description": "API key for authenticating to OpenAI services"
          },
          "model_types": {
            "type": "array",
            "description": "List of model types to be integrated",
            "items": {
              "type": "string",
              "enum": [
                "davinci",
                "gpt-3o-mini-high",
                "gpt-3o-mini",
                "5o-flagship"
              ]
            }
          },
          "parameters": {
            "type": "object",
            "description": "Additional parameters for model integration",
            "properties": {
              "timeout": {
                "type": "number",
                "description": "Timeout duration for the API requests"
              },
              "version": {
                "type": "string",
                "description": "Version of the models to be used"
              },
              "models_count": {
                "type": "number",
                "description": "Number of models to integrate"
              }
            },
            "additionalProperties": false,
            "required": [
              "timeout",
              "version",
              "models_count"
            ]
          }
        },
        "additionalProperties": false
      }
    }
  }
}

(_/)
(^.^)
/> 📝


✏️ Learnings added
Learnt from: josefkedwards
PR: bearycool11/PMLL_logic_loop_Knowledge_block#137
File: .deploy/.deployDeployChatGPTOtoroshiLLM.JSON:215-271
Timestamp: 2025-02-10T05:15:03.187Z
Learning: JSON configuration files for the FINN Otoroshi LLM AI system should be structured with docstrings and organized into logical sections (e.g., env_vars, components) under a single root object.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

@josefkedwards Here's the merged JSON structure with docstrings:

{
  "/**": "Configuration file for FINN Otoroshi LLM AI system.",
  "description": "This file contains environment variables and component definitions for the ChatGPT integration with Otoroshi LLM.",
  "version": "1.0.0",
  "maintainer": "Dr. J.K. Edwards",
  
  "env_vars": {
    "/**": "Environment variables for system configuration and authentication",
    "values": [
      {
        "name": "ADMIN_API_CLIENT_ID",
        "value": "d6ScnQskOCntCxt3"
      },
      {
        "name": "ADMIN_API_CLIENT_ID_FINNAI",
        "value": "FINNAIOTOROSHILLM"
      }
      // ... rest of the environment variables ...
    ]
  },
  
  "components": {
    "/**": "Component definitions for system functionality",
    "integrate_models": {
      "name": "integrate_models",
      "description": "Generates a function that uses API key to call OpenAI models and integrates them into the FINN otoroshi GPT 5o LLM.",
      "strict": true,
      "parameters": {
        "type": "object",
        "required": [
          "api_key",
          "model_types",
          "parameters"
        ],
        "properties": {
          "api_key": {
            "type": "string",
            "description": "API key for authenticating to OpenAI services"
          },
          "model_types": {
            "type": "array",
            "description": "List of model types to be integrated",
            "items": {
              "type": "string",
              "enum": [
                "davinci",
                "gpt-3o-mini-high",
                "gpt-3o-mini",
                "5o-flagship"
              ]
            }
          },
          "parameters": {
            "type": "object",
            "description": "Additional parameters for model integration",
            "properties": {
              "timeout": {
                "type": "number",
                "description": "Timeout duration for the API requests"
              },
              "version": {
                "type": "string",
                "description": "Version of the models to be used"
              },
              "models_count": {
                "type": "number",
                "description": "Number of models to integrate"
              }
            },
            "additionalProperties": false,
            "required": [
              "timeout",
              "version",
              "models_count"
            ]
          }
        },
        "additionalProperties": false
      }
    }
  }
}

(_/)
(^.^)
/> 📝


✏️ Learnings added
Learnt from: josefkedwards
PR: bearycool11/PMLL_logic_loop_Knowledge_block#137
File: .deploy/.deployDeployChatGPTOtoroshiLLM.JSON:215-271
Timestamp: 2025-02-10T05:15:27.482Z
Learning: JSON configuration files for the FINN Otoroshi LLM AI system should be structured with docstrings and organized into logical sections (e.g., env_vars, components) under a single root object.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

Copy link
Owner

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

@coderabbitai could you show the rest of the code and the entire JSON as one singular code comment response for complete resolution? Please fill in the blank so I can finish this up and commit.

Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

@bearycool11 Here's the complete merged JSON structure with all components:

{
  "/**": "Configuration file for FINN Otoroshi LLM AI system.",
  "description": "This file contains environment variables, component definitions, and Redis configurations for the ChatGPT integration with Otoroshi LLM.",
  "version": "1.0.0",
  "maintainer": "Dr. J.K. Edwards",
  
  "env_vars": {
    "/**": "Environment variables for system configuration and authentication",
    "values": [
      {
        "name": "ADMIN_API_CLIENT_ID",
        "value": "d6ScnQskOCntCxt3"
      },
      {
        "name": "ADMIN_API_CLIENT_ID_FINNAI",
        "value": "FINNAIOTOROSHILLM"
      },
      {
        "name": "ADMIN_API_CLIENT_SECRET",
        "value": "ZVWqHEnsoN00EfgvXLpQB4FRXUCQPmz0"
      },
      // ... [previous env vars continued]
      {
        "name": "SESSION_SECURE_ONLY",
        "value": "true"
      }
    ]
  },

  "redis_config": {
    "/**": "Redis connection and configuration settings",
    "values": [
      {
        "name": "REDIS_CLI_DIRECT_URI",
        "value": "redis://gTFsuSuzR8BG3zGc3Ey@hv-mtl2-016.clvrcld.net:10331"
      },
      {
        "name": "REDIS_CLI_URL",
        "value": "redis://gTFsuSuzR8BG3zGc3Ey@bhvm4krrk9yfa7n2cwkq-redis.services.clever-cloud.com:40248"
      },
      // ... [previous redis config continued]
      {
        "name": "REDIS_VERSION",
        "value": "7.2.4"
      }
    ]
  },
  
  "components": {
    "/**": "Component definitions for system functionality",
    "integrate_models": {
      "name": "integrate_models",
      "description": "Generates a function that uses API key to call OpenAI models and integrates them into the FINN otoroshi GPT 5o LLM.",
      "strict": true,
      "parameters": {
        "type": "object",
        "required": [
          "api_key",
          "model_types",
          "parameters"
        ],
        "properties": {
          "api_key": {
            "type": "string",
            "description": "API key for authenticating to OpenAI services"
          },
          "model_types": {
            "type": "array",
            "description": "List of model types to be integrated",
            "items": {
              "type": "string",
              "enum": [
                "davinci",
                "gpt-3o-mini-high",
                "gpt-3o-mini",
                "5o-flagship"
              ]
            }
          },
          "parameters": {
            "type": "object",
            "description": "Additional parameters for model integration",
            "properties": {
              "timeout": {
                "type": "number",
                "description": "Timeout duration for the API requests"
              },
              "version": {
                "type": "string",
                "description": "Version of the models to be used"
              },
              "models_count": {
                "type": "number",
                "description": "Number of models to integrate"
              }
            },
            "additionalProperties": false,
            "required": [
              "timeout",
              "version",
              "models_count"
            ]
          }
        },
        "additionalProperties": false
      }
    },
    
    "debug_api": {
      "/**": "Debug API integration component",
      "name": "Debug API Integration",
      "description": "Debugs and integrates multiple API configurations and models",
      "version": "0.0.1",
      "key": "debug-api-integration",
      "props": {
        "configurations": {
          "type": "string[]",
          "label": "API Configurations",
          "description": "List of API configuration objects containing name/value pairs"
        },
        "memorySiloId": {
          "type": "integer",
          "label": "Memory Silo ID",
          "description": "Unique identifier for the memory silo"
        },
        "ioSocketId": {
          "type": "integer",
          "label": "I/O Socket ID",
          "description": "Unique identifier for the I/O socket"
        },
        "apiKey": {
          "type": "string",
          "label": "API Key",
          "description": "API key for authentication"
        },
        "modelTypes": {
          "type": "string[]",
          "label": "Model Types",
          "description": "List of model types to integrate",
          "options": ["davinci", "gpt-3o-mini-high", "gpt-3o-mini", "5o-flagship"]
        },
        "timeout": {
          "type": "integer",
          "label": "Timeout",
          "description": "Timeout duration in milliseconds"
        },
        "version": {
          "type": "string",
          "label": "Version",
          "description": "Version of models to use"
        },
        "modelsCount": {
          "type": "integer",
          "label": "Models Count",
          "description": "Number of models to integrate"
        }
      }
    }
  }
}

(_/)
(^.^)
/> 🔧

import { axios } from "@pipedream/platform"
import openai from "@pipedream/openai"

export default defineComponent({
key: "openai-create-assistant",
name: "Create Assistant",
version: "0.0.1",
description: "Create an OpenAI assistant",
props: {
openai,
model: {
type: "string",
label: "Model",
description: "The model to use",
async options() {
const models = await axios($, {
url: "https://api.openai.com/v1/models",
headers: {
Authorization: `Bearer ${this.openai.$auth.api_key}`,
},
})
return models.data.map(model => model.id)
}
},
instructions: {
type: "string",
label: "Instructions",
description: "Instructions limited to 512 characters",
optional: true,
max: 512
}
},
async run({ $ }) {
return await axios($, {
url: "https://api.openai.com/v1/assistants",
method: "POST",
headers: {
Authorization: `Bearer ${this.openai.$auth.api_key}`,
"Content-Type": "application/json",
},
data: {
model: this.model,
instructions: this.instructions?.substring(0, 512),
},
})
},
})

115 changes: 115 additions & 0 deletions .github/workflows/dotnet-desktop.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,115 @@
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

# This workflow will build, test, sign and package a WPF or Windows Forms desktop application
# built on .NET Core.
# To learn how to migrate your existing application to .NET Core,
# refer to https://docs.microsoft.com/en-us/dotnet/desktop-wpf/migration/convert-project-from-net-framework
#
# To configure this workflow:
#
# 1. Configure environment variables
# GitHub sets default environment variables for every workflow run.
# Replace the variables relative to your project in the "env" section below.
#
# 2. Signing
# Generate a signing certificate in the Windows Application
# Packaging Project or add an existing signing certificate to the project.
# Next, use PowerShell to encode the .pfx file using Base64 encoding
# by running the following Powershell script to generate the output string:
#
# $pfx_cert = Get-Content '.\SigningCertificate.pfx' -Encoding Byte
# [System.Convert]::ToBase64String($pfx_cert) | Out-File 'SigningCertificate_Encoded.txt'
#
# Open the output file, SigningCertificate_Encoded.txt, and copy the
# string inside. Then, add the string to the repo as a GitHub secret
# and name it "Base64_Encoded_Pfx."
# For more information on how to configure your signing certificate for
# this workflow, refer to https://github.com/microsoft/github-actions-for-desktop-apps#signing
#
# Finally, add the signing certificate password to the repo as a secret and name it "Pfx_Key".
# See "Build the Windows Application Packaging project" below to see how the secret is used.
#
# For more information on GitHub Actions, refer to https://github.com/features/actions
# For a complete CI/CD sample to get started with GitHub Action workflows for Desktop Applications,
# refer to https://github.com/microsoft/github-actions-for-desktop-apps

name: .NET Core Desktop

on:
push:
branches: [ "main" ]
pull_request:
branches: [ "main" ]

jobs:

build:

strategy:
matrix:
configuration: [Debug, Release]

runs-on: windows-latest # For a list of available runner types, refer to
# https://help.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idruns-on

env:
Solution_Name: your-solution-name # Replace with your solution name, i.e. MyWpfApp.sln.
Test_Project_Path: your-test-project-path # Replace with the path to your test project, i.e. MyWpfApp.Tests\MyWpfApp.Tests.csproj.
Wap_Project_Directory: your-wap-project-directory-name # Replace with the Wap project directory relative to the solution, i.e. MyWpfApp.Package.
Wap_Project_Path: your-wap-project-path # Replace with the path to your Wap project, i.e. MyWpf.App.Package\MyWpfApp.Package.wapproj.

steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0

# Install the .NET Core workload
- name: Install .NET Core
uses: actions/setup-dotnet@v4
with:
dotnet-version: 8.0.x

# Add MSBuild to the PATH: https://github.com/microsoft/setup-msbuild
- name: Setup MSBuild.exe
uses: microsoft/setup-msbuild@v2

# Execute all unit tests in the solution
- name: Execute unit tests
run: dotnet test

# Restore the application to populate the obj folder with RuntimeIdentifiers
- name: Restore the application
run: msbuild $env:Solution_Name /t:Restore /p:Configuration=$env:Configuration
env:
Configuration: ${{ matrix.configuration }}

# Decode the base 64 encoded pfx and save the Signing_Certificate
- name: Decode the pfx
run: |
$pfx_cert_byte = [System.Convert]::FromBase64String("${{ secrets.Base64_Encoded_Pfx }}")
$certificatePath = Join-Path -Path $env:Wap_Project_Directory -ChildPath GitHubActionsWorkflow.pfx
[IO.File]::WriteAllBytes("$certificatePath", $pfx_cert_byte)

# Create the app package by building and packaging the Windows Application Packaging project
- name: Create the app package
run: msbuild $env:Wap_Project_Path /p:Configuration=$env:Configuration /p:UapAppxPackageBuildMode=$env:Appx_Package_Build_Mode /p:AppxBundle=$env:Appx_Bundle /p:PackageCertificateKeyFile=GitHubActionsWorkflow.pfx /p:PackageCertificatePassword=${{ secrets.Pfx_Key }}
env:
Appx_Bundle: Always
Appx_Bundle_Platforms: x86|x64
Appx_Package_Build_Mode: StoreUpload
Configuration: ${{ matrix.configuration }}

# Remove the pfx
- name: Remove the pfx
run: Remove-Item -path $env:Wap_Project_Directory\GitHubActionsWorkflow.pfx

# Upload the MSIX package: https://github.com/marketplace/actions/upload-a-build-artifact
- name: Upload build artifacts
uses: actions/upload-artifact@v4
with:
name: MSIX Package
path: ${{ env.Wap_Project_Directory }}\AppPackages
Loading