GenAI artificial-intelligence prompt-engineering
As we continue our journey into uncovering Generative AI (GenAI), and with some basics under our belt, it is time for us as developers to build something tangible and get our hands dirty. In this post, we will dive into building our first single-purpose GenAI application: a lightweight tool that leverages a local LLM to generate responses based on conversational context. Our tech stack consists of .NET and Semantic Kernel, and we will use Ollama to run our local LLM. By the end of this post, you'll have a working example you can run entirely on your own machine.
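Before wiring anything up in .NET, it helps to see what our app will ultimately be talking to. The sketch below shows the JSON shape that Ollama's local HTTP API accepts on its default endpoint (`localhost:11434`); the model name `llama3` is just an example stand-in for whichever model you have pulled locally.

```shell
# Sketch: the request body Ollama's local /api/generate endpoint expects.
# Ollama listens on http://localhost:11434 by default; "llama3" is an
# example model name -- substitute any model you've pulled with `ollama pull`.
cat <<'EOF' > request.json
{"model": "llama3", "prompt": "Explain Semantic Kernel in one sentence.", "stream": false}
EOF

# With the Ollama server running, this command would send the request:
# curl http://localhost:11434/api/generate -d @request.json
cat request.json
```

Semantic Kernel will abstract this plumbing away for us, but knowing the raw request shape makes it easier to debug the application later.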
The pace at which GenAI has exploded from novelty to necessity is reflected in its landscape of tools, platforms, communities, and breakthroughs. The field is evolving so rapidly that staying up to date can easily feel like a full-time job. Hence, this blog focuses on the current state of the ecosystem to help readers get started.
Many products are actively finding innovative use cases for helping their customer base with Generative AI, aka GenAI. As software engineers and architects, we are tasked with building the applications and integrations that sit behind the scenes, powering these GenAI use cases. A core belief that guides this approach is that establishing a well-formed understanding of a technology before integrating it helps uncover its maximum benefit. For those who share this belief, or who simply like to go down the rabbit hole, this blog is the first in a series that uncovers the basics of GenAI (i.e., removes the "buzz" from the buzzwords).