BricksForge Assistant

Seems like a good option for BricksForge! The API is now available!


Starting today, you can build apps and products with our new ChatGPT and Whisper APIs. You can read more about the APIs and early partners’ use cases here.

ChatGPT API

The new Chat API calls gpt-3.5-turbo, the same model used in the ChatGPT product. It’s also our best model for many non-chat use cases; we’ve seen early testers migrate from text-davinci-003 to gpt-3.5-turbo with only a small amount of adjustment needed to their prompts. Learn more about the Chat API in our documentation.

Pricing
It’s priced at $0.002 per 1K tokens, which is 10x cheaper than the existing GPT-3.5 models.

Model updates
We are constantly improving our Chat models, and want to make these improvements available to developers as well. Developers who use the gpt-3.5-turbo model will always get our recommended stable model, while still having the flexibility to opt for a specific model version.

For example, today we’re releasing “gpt-3.5-turbo-0301”, which will be supported through at least June 1st, and we’ll update gpt-3.5-turbo to a new stable release in April. The models page will provide switchover updates.
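In practice, a minimal Chat API call looks roughly like this (just a sketch, assuming Node 18+ run as an ES module and the key in an OPENAI_API_KEY environment variable):

// Minimal Chat Completions request against gpt-3.5-turbo (Node 18+, ES module).
const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: "You are a helpful web development assistant." },
      { role: "user", content: "Write a CSS rule that vertically centers a div." },
    ],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content); // the assistant's reply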

Whisper API

We’ve been thrilled to see the community response to Whisper, the speech-to-text model we open-sourced in September 2022. Starting today, the large-v2 model is available through our API as whisper-1 – making it the fastest, cheapest, and most convenient way to use the model. Learn more about the Whisper API in the documentation.

Pricing
The API is priced at $0.006 / minute, rounded up to the nearest second.
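A minimal transcription request is a multipart upload (again a sketch: Node 18+, ES module, OPENAI_API_KEY set, and an audio file on disk):

// Minimal Whisper transcription request: upload an audio file as multipart/form-data.
import { readFile } from "node:fs/promises";

const form = new FormData();
form.append("model", "whisper-1");
form.append("file", new Blob([await readFile("./dictation.mp3")]), "dictation.mp3");

const response = await fetch("https://api.openai.com/v1/audio/transcriptions", {
  method: "POST",
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
  body: form,
});

const { text } = await response.json();
console.log(text); // the plain-text transcript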

Changes in our policies and how we use your data

Over the past six months, we’ve been collecting feedback from our API customers to understand how we can better serve them. We’ve made a number of concrete changes, such as:

  • Data submitted through the API is no longer used for model training or other service improvements, unless you explicitly opt in
  • Implementing a default 30-day data retention policy for API users, with options for shorter retention windows depending on user needs
  • Removing our pre-launch review – unlocked by improving our automated monitoring
  • Simplifying our Terms of Service and Usage Policies, including terms around data ownership: users own the input and output of the models

Check out what our early partners like Snap, Shopify, and Instacart have built with the new APIs – and start building next-generation apps powered by ChatGPT & Whisper today.

—The OpenAI team

5 Likes

Nice intro by 1littlecoder

2 Likes

Cool! In which areas of BricksForge could you guys imagine this?

2 Likes

Right now I see your big differentiator as the GSAP animations interface that you have built. I’m a total beginner with GSAP and still trying hard to get the hang of it.

If I could avoid having to watch all the GreenSock / GSAP tutorials and it could just suggest some specific “Animation Objects”, that would be very helpful.

1 Like

Here are some rephrased prompt ideas for an integration into the terminal:

  • Provide answers to questions related to HTML, PHP, and JavaScript, allowing users to quickly troubleshoot issues or gain insight into specific coding techniques.
  • Offer prompts for generating text ideas, such as headlines or descriptions, to help users with content creation.
  • Allow users to input design ideas and receive terminal code to build the structure of their design. This could help streamline the design process and make it easier to implement custom designs.
  • Provide prompts for generating GSAP code, our beloved animation library.

In addition to these prompts, a help icon could be included in specific places to trigger the terminal to answer questions there.

To add flexibility and enable the sharing of best solutions, prompt templates could be used and triggered with a specific keyword. Users could also have the option to add custom prompts, further enhancing the tool’s flexibility and usability.
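To make the template idea a bit more concrete, here is a rough sketch of what a keyword-triggered template registry could look like (all names and prompt texts here are made up for illustration):

// Hypothetical prompt-template registry for the terminal: each keyword maps to
// a function that turns the user's input into a system/user message pair.
const promptTemplates = {
  headline: (input) => ({
    system: "You are a copywriter. Return exactly 5 short headline ideas as a plain list.",
    user: input,
  }),
  gsap: (input) => ({
    system: "You are a GSAP expert. Answer with a single JavaScript code block and no prose.",
    user: input,
  }),
};

// "/headline hero section for a bakery site" -> messages array for the Chat API
function buildMessages(command) {
  const [, keyword, rest] = command.match(/^\/(\w+)\s+(.*)$/) ?? [];
  const template = promptTemplates[keyword];
  if (!template) throw new Error(`Unknown prompt template: ${keyword}`);
  const { system, user } = template(rest);
  return [
    { role: "system", content: system },
    { role: "user", content: user },
  ];
}

Custom prompts would then just be extra entries in that registry, which could also be shared between users.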

In addition to the help icon, another feature that could be integrated into the terminal is a small microphone icon that uses Whisper (OpenAI speech-to-text). This would allow users to dictate commands or ask questions without having to type them out manually. Whisper is designed to work in a conversational manner, meaning it can recognize and transcribe speech more naturally than traditional speech-to-text software. This feature could be particularly useful for users who prefer speaking to typing or who may have limited mobility or dexterity. By combining both visual and voice recognition features, the terminal would provide a more comprehensive and accessible user experience.
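As a rough browser-side sketch of that microphone button (the transcribe() helper is assumed, not an existing API; it would POST the blob to the Whisper transcription endpoint as in the snippet earlier in the thread):

// Hypothetical dictation flow: record a short clip with the MediaRecorder API,
// then hand the resulting audio blob to an assumed transcribe() helper.
async function recordAndTranscribe(seconds = 5) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = (event) => chunks.push(event.data);

  const stopped = new Promise((resolve) => (recorder.onstop = resolve));
  recorder.start();
  setTimeout(() => recorder.stop(), seconds * 1000);
  await stopped;

  stream.getTracks().forEach((track) => track.stop()); // release the microphone
  const audio = new Blob(chunks, { type: recorder.mimeType });
  return transcribe(audio); // assumed helper, forwards the blob to Whisper
}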


When considering the cost of the integration into the terminal, there are two possible options. The first option is to offer a token subscription through BricksForge. This would provide users with a convenient way to access the integration while also ensuring that the tool remains sustainable and up-to-date. The token subscription could be offered at a reasonable price point that reflects the value of the tool and the cost of maintaining it over time.

The second option is to let users pay for tokens directly, using their own OpenAI API key. This would give users greater control over their usage and spending, as they would only be charged for the tokens they use. However, this option would require more setup and configuration on the user’s end, as they would need to obtain their own OpenAI API key and set up billing and payment information.

Ultimately, the choice between these two options will depend on the needs and preferences of the user base. Offering a token subscription through BricksForge would provide a more streamlined and user-friendly experience, while allowing users to pay for their own tokens would provide greater flexibility and control. Regardless of which option is chosen, it is important to ensure that the cost is fair and reasonable, in order to encourage adoption and support long-term sustainability.

5 Likes

Thank you, @MaxZieb!!! :heart:

1 Like

Here is a simple playground test case for Emmet syntax. A tiny bit of priming and the output is already awesome and pretty consistent. Add more Bricks-specific rules if needed.


It used some classes in the second answer. These could potentially be guided by providing them in the system message.

This could be one of the presets, and we could create more ourselves…
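Purely as an illustration of such a preset (the wording is my assumption, not the exact playground prompt), the priming could look like this:

// Sketch of an "Emmet preset": the system message primes the model to answer
// with a single Emmet abbreviation and nothing else.
const emmetPreset = [
  {
    role: "system",
    content:
      "You translate layout descriptions into a single Emmet abbreviation. " +
      "Prefer the classes .container, .row and .col when they fit. " +
      "Return only the abbreviation, with no explanation and no code fences.",
  },
  {
    role: "user",
    content: "A section with a heading and three cards; each card has an image and a button.",
  },
];
// An answer would be expected to look something like:
// section>h2+div.row>div.col.card*3>img+button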

2 Likes

Here is a simple playground test case for CSS syntax. In this case there would be some prompt engineering involved, meaning the user prompt would be constructed like this:

selector: {enter selected element id/class} task: {user request}

Another good thing would be to set some metrics of the current site in the site settings.

It could potentially even pick the class or ID based on user input…

I had some glitches where it returned Markdown code syntax around the CSS, but telling it not to do that, plus a strip function as a fallback, should do the trick.
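Such a fallback strip function is only a few lines (a sketch):

// Fallback: strip Markdown code fences if the model wraps its CSS in them anyway.
function stripCodeFences(answer) {
  // `{3} matches a literal triple-backtick fence
  const fenced = answer.match(/`{3}(?:css)?\s*([\s\S]*?)`{3}/i);
  return (fenced ? fenced[1] : answer).trim();
}

const fence = "`".repeat(3);
stripCodeFences(fence + "css\n.test123 { color: red; }\n" + fence);
// -> ".test123 { color: red; }"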

Here are some experiments for selecting helpful content from the academy. Just wanted to demonstrate that it can also be used to select a URL (or any other option) based on a list of stuff provided in “System”…

One could also just do a search in the academy with a keyword and return the results to a new system context.

2 Likes

Additional idea :bulb:

What if you implemented a form processing feature powered by GPT-3? This would enable the creation of instant apps with multiple capabilities. Users could simply fill out a form and receive a response based on their input values or utilize GPT-3 as an intermediary step to process fields. To make this even more appealing, you could include tokens in each recurring plan. Also allow users to insert their own API keys, especially if they plan to use GPT-3 in a business context or build a tool around it.

Providing some tokens as part of the subscription package would sweeten the deal and make it more enticing for users to subscribe.
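A rough server-side sketch of that intermediary step (the function name, template syntax and prompt are all assumptions, just to show the flow):

// Hypothetical form handler: interpolate submitted fields into a prompt
// template and return the model's answer as the form "result".
async function processForm(fields, promptTemplate) {
  // e.g. promptTemplate = "Write a product description for {{name}}, aimed at {{audience}}."
  const prompt = promptTemplate.replace(/{{(\w+)}}/g, (_, key) => fields[key] ?? "");

  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }],
    }),
  });

  const data = await response.json();
  return data.choices[0].message.content;
}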

Sidenote :spiral_notepad:

You get even better, more consistent results with something like this (in the prompt or in the system message):

Use this template:
Photography in the style of Louise Dahl Wolfe 
"{TITLE}": {DESCRIPTION}

Example:
Photography in the style of Louise Dahl Wolfe 
"Medieval Market": The image shows a bustling market in a medieval town. There are merchants selling all kinds of goods, including food, clothing, and household items. People are haggling over prices and chatting with each other. In the background, you can see the town square and a church. The scene is full of life and energy, and you can almost hear the sounds of the market. 

Task:
[PUT THE USER REQUEST HERE]

I have been using templates like these for Image AI … the recipe is easy:

  • Provide a template
  • Add an example (or multiple, for few-shot learning)
  • Finish off with a new task/topic (and a limit for the number of responses)

The results are very stable and consistent.
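A quick sketch of how that recipe could be packed into a single Chat API request (the helper name is made up; the template and example are the ones above, shortened):

// Template + one example + new task ("few-shot" prompting as described above).
const template =
  'Photography in the style of Louise Dahl Wolfe\n"{TITLE}": {DESCRIPTION}';

const example =
  'Photography in the style of Louise Dahl Wolfe\n' +
  '"Medieval Market": The image shows a bustling market in a medieval town. ' +
  "People are haggling over prices and chatting with each other.";

function buildFewShotPrompt(task, limit = 3) {
  return [
    { role: "system", content: "You write image prompts. Follow the template exactly." },
    {
      role: "user",
      content:
        "Use this template:\n" + template +
        "\n\nExample:\n" + example +
        "\n\nTask:\n" + task +
        "\n\nReturn " + limit + " variations.",
    },
  ];
}

buildFewShotPrompt("A foggy fishing harbour at dawn");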


Nice little demo of a JS implementation (although that exposes the API key) … still fun to watch: ChatGPT API Walkthrough - YouTube

3 Likes

Also, excellent at creating animation objects (explanations option)

3 Likes

Incredible stuff going on here.
My humble opinion:
If you incorporate this assistant, I can see myself using it to help me with GSAP animations and also for a few more advanced CSS things.

I’m also using ChatGPT at the moment to write little PHP functions, but I’m not sure if that has to be part of the BricksForge assistant.

What about the ability to use the assistant inside the “Code” element or “Custom CSS” code area as well with a specified prefix?

/assist Create a pulse animation effect for element .test123
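A sketch of how such a prefix could be handled (everything here is hypothetical, just to illustrate the flow):

// Hypothetical handler for a "/assist ..." line typed inside a code area:
// the rest of the line becomes the request, the existing code becomes context.
function parseAssistCommand(line, surroundingCode) {
  if (!line.startsWith("/assist ")) return null;
  const request = line.slice("/assist ".length).trim();
  return [
    {
      role: "system",
      content: "You write CSS/JS for a Bricks site. Answer with code only, no prose.",
    },
    { role: "user", content: `Existing code:\n${surroundingCode}\n\nTask: ${request}` },
  ];
}

parseAssistCommand(
  "/assist Create a pulse animation effect for element .test123",
  ".test123 { width: 40px; }"
);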

2 Likes

If you already have an amazing terminal and want users to focus on that concept (double down on it), it’s best not to introduce too many “hidden” features. One solution is to add an icon to the places where there is an assist function. Clicking the icon opens the terminal for input with a response interface (responses appear below), and you can insert the response you want. This way, the icon opens a context, and the terminal knows which prompt script to use (to get the desired responses) and where to insert the result (since the call origin is connected to the assist request by the user clicking the icon)!

(Image: just an arbitrary sample icon, maybe too intricate for small sizes.)

I absolutely love your product and think you have a solid foundation. However, with so many features already included, it’s important to focus on documentation, fine-tuning, and providing assistance to users. In my opinion, an assistant would be the ultimate helper and ensure that your product is fully realized even for beginners and less code-savvy people.


If an assist request is running in the terminal, you get a little icon and the name of the script being used to construct the prompt behind the scenes (maybe users can create their own down the line! :wink: )

3 Likes

Fun video to watch about the new System field and the idea of adjusting it in context, as suggested above: First look at ChatGPT API - the age of Autonomous AI begins TODAY! Cognitive Architectures ahoy! - YouTube

1 Like


Here is an idea to allow prompt templates that pull in external data by tag.

google(query): This command uses the Google Search API to perform a Google search for the query and return the top search result. The result is a URL to the webpage that Google has determined to be the most relevant to the query.

More web development lookups:

  1. caniuse(query): This command uses the Can I Use database to check the browser compatibility of a feature in HTML, CSS, or JavaScript. Example usage: caniuse("flexbox")

  2. mdn(query): This command uses the Mozilla Developer Network to provide documentation for a given HTML, CSS, or JavaScript feature. Example usage: mdn("Array.map")

  3. npm(query): This command uses the npm registry to search for a package and provide information about it, such as its version, description, and dependencies. Example usage: npm("react")

  4. github(query): This command uses the GitHub API to search for a repository and provide information about it, such as its name, description, and stars. Example usage: github("bootstrap")

  5. stackoverflow(query): This command uses the Stack Overflow API to search for questions and answers related to a programming problem or topic. Example usage: stackoverflow("how to remove duplicates from an array in JavaScript")

  6. css-tricks(query): This command uses the CSS-Tricks website to search for articles and tutorials related to CSS. Example usage: css-tricks("how to create a responsive layout with CSS grid")

  7. wikipedia(query) : This command uses the Wikipedia API to search for and return the summary section of the Wikipedia page for the given query.

  8. fetch(url, limit=None) : This command uses the Python Requests library to fetch the content of the webpage at the given URL. By default, the command returns the entire content of the webpage. However, if a limit is specified, the command will return only the first limit characters of the webpage content.

stripped_query: This variable holds the stripped-down version of the user’s original query, with filler words removed using natural language processing techniques. The variable is available to all other commands and can be used as a more focused and targeted search query for information related to the original query.

The stripped_query variable can be useful for avoiding repetition and reducing the amount of code needed to perform similar searches across different commands. Instead of repeating the same natural language processing techniques to strip filler words from the original query in every command, the stripped_query variable can be used as a pre-processed query that has already been stripped of filler words. This can save time and reduce the likelihood of errors or inconsistencies in the processing of the original query.
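To illustrate how such tags could be expanded before the prompt is sent, here is a hypothetical sketch; only the plain fetch lookup is actually implemented, the others (google, mdn, caniuse, …) would each wrap their respective service:

// Hypothetical tag expander: find name("arg") tags inside a prompt template and
// replace each one with whatever the matching lookup returns.
const lookups = {
  fetch: async (url, limit) => {
    const body = await (await fetch(url)).text();
    return limit ? body.slice(0, Number(limit)) : body;
  },
};

async function expandTags(template) {
  const tagPattern = /(\w+)\(([^)]*)\)/g;
  let result = template;
  for (const [tag, name, rawArgs] of template.matchAll(tagPattern)) {
    const lookup = lookups[name];
    if (!lookup) continue; // unknown tags are left untouched
    const args = rawArgs
      .split(",")
      .map((part) => part.trim().replace(/^["']|["']$/g, ""));
    result = result.replace(tag, await lookup(...args));
  }
  return result;
}

// expandTags('Summarize this page: fetch("https://example.com", 2000)')
//   .then(console.log);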

1 Like

Another great template for the System field (I went a little deep down the rabbit hole in the previous post already). This should be a nice starter system message:

You are a CSS expert who loves to assist others! Given the following code snippets and explanations, answer the question using only that information, outputted in markdown format. If the answer is not explicitly provided in the given information, please say "Sorry, I don't know how to help with that."

Context sections: {{contextText}}

Question: {{sanitizedQuery}}

Answer as markdown (including related code snippets if available as code blocks)

{{contextText}} would be the previous CSS or whatever else is relevant to the question.
{{sanitizedQuery}} would be the user’s question.


If you want to force a CSS/JS-only answer, you could do something like:

Answer as pure CSS (including related comment as CSS comments)

or, if comments are disabled (toggle):

Answer as pure CSS (no comments)

The same goes for JS.
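Wiring those placeholders up is straightforward; a small sketch, using the starter system text from above:

// Fill {{contextText}} / {{sanitizedQuery}} in the starter system message above.
const systemTemplate =
  "You are a CSS expert who loves to assist others! Given the following code " +
  "snippets and explanations, answer the question using only that information, " +
  "outputted in markdown format. If the answer is not explicitly provided in the " +
  'given information, please say "Sorry, I don\'t know how to help with that."\n\n' +
  "Context sections: {{contextText}}\n\n" +
  "Question: {{sanitizedQuery}}\n\n" +
  "Answer as markdown (including related code snippets if available as code blocks)";

function buildSystemMessage(contextText, sanitizedQuery) {
  return {
    role: "system",
    content: systemTemplate
      .replace("{{contextText}}", contextText)
      .replace("{{sanitizedQuery}}", sanitizedQuery),
  };
}

buildSystemMessage(
  ".test123 { width: 40px; }",
  "How do I add a pulse animation to .test123?"
);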

Any updates on this? Is it on the roadmap? It’s certainly the future…

1 Like

Thanks so much for your great input, @MaxZieb. This is definitely on the list! I’ve saved this post to pull up when it gets going. Currently I have a few other things that have a higher priority for 1.0. But I’m really looking forward to the AI connection!

1 Like

That is good to know… If you need any help with that phase (prompts or testing), please let me know. I am more than happy to help and make this the best integration so far!

1 Like

I happen to follow you on Twitter, Max!
@Daniele this guy is the guy if you need a guy who is an AI guy.
(Heathen when it comes to crypto though :stuck_out_tongue: )

1 Like

Thanks … just reading the news. I was fortunate to get a playground account at OpenAI early on… at the beginning of last year.

That’s right, crypto is the worst Ponzi scam ever… but let’s not derail this thread…

2 Likes