This article will explore the different ways to maintain “state”—the memory of your React application.
Choosing the right state management approach can be the difference between a clean, scalable app and a “prop-drilling” nightmare. This guide will compare the four most common approaches: useState, useReducer, Context API, and External Stores.
Managing State – Dos and Don’ts
When architecting your React app, follow these best practices to keep your data flow predictable.
DO keep state as local as possible. If only one component needs it, use useState.
DO use useReducer when one state change affects multiple sub-values (like a form) or when the state value is a result of complex logic or calculations.
DON’T put everything in a global store; it makes debugging harder and can slow down performance.
DO lift state up to the nearest common ancestor when two sibling components need the same data.
DO return a new object in your Reducer rather than mutating the old one.
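To illustrate that last point, spreading the old state into a new object keeps the previous value intact, which is what lets React detect the change. A minimal sketch (not tied to any particular component; the action names are my own):

```javascript
// Reducers should treat state as read-only: copy, then change the copy.
function reducer(state, action) {
  switch (action.type) {
    case 'increment':
      // BAD: state.count += 1 (mutates the old object, React may not re-render)
      // GOOD: return a brand-new object carrying the updated value
      return { ...state, count: state.count + 1 };
    default:
      return state;
  }
}

const before = { count: 0, theme: 'dark' };
const after = reducer(before, { type: 'increment' });

console.log(before.count);     // 0 – the old object is untouched
console.log(after.count);      // 1 – the new object carries the change
console.log(before === after); // false – a new reference signals the change
```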
| Method | Best For… | Complexity | Scaling |
|---|---|---|---|
| useState | Local UI toggle, simple inputs | Low | Limited |
| useReducer | Complex logic, related state pieces | Medium | Medium |
| Context API | Theming, User Auth, Language | Medium | Medium |
| External | Complex data, high-frequency updates | High | Excellent |
Defining the Scenarios
Our comparison focuses on four specific use cases. This post highlights the implementation of the first three, with a deep dive into external stores coming in a later post.
Local State: Managing a simple toggle or text input.
Complex Local State: Managing a character’s stats (Health, Mana, XP) in a game.
Shared State: Passing a “Dark Mode” setting across the entire app.
Complex Global State: Managing a shopping cart with persistent storage.
Implementation Guide
1. Local State with useState
Most of your state should live here. It’s fast, simple, and built into React.
JavaScript
import React, { useState } from 'react';

function Counter() {
  // Define a state variable called "count"
  const [count, setCount] = useState(0);

  return (
    <div>
      <p>You clicked {count} times</p>
      <button onClick={() => setCount(count + 1)}>
        Click me
      </button>
    </div>
  );
}
2. The “Mission Control” with useReducer
When your state logic gets hairy—like a game character with multiple stats—useReducer is your best friend. It separates the “what happened” (Action) from the “how to update” (Reducer).
JavaScript
// The Reducer function handles all the logic outside the component, in one place
function reducer(state, action) {
  switch (action.type) {
    case 'attacked':
      return { ...state, hp: state.hp - 10 };
    case 'heal':
      return { ...state, hp: state.hp + 10 };
    default:
      return state;
  }
}

const [state, dispatch] = useReducer(reducer, { hp: 100 });

// Trigger it anywhere
dispatch({ type: 'attacked' });
Pros: Centralized logic, easier to test, great for complex objects.
Cons: More boilerplate code than a simple useState.
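Because the reducer above is a pure function, you can exercise it without rendering anything. A sketch (the reducer is copied from the snippet above; everything else is just plain Node.js):

```javascript
// The same reducer as above, pure and framework-free
function reducer(state, action) {
  switch (action.type) {
    case 'attacked': return { ...state, hp: state.hp - 10 };
    case 'heal':     return { ...state, hp: state.hp + 10 };
    default:         return state;
  }
}

// "Testing" is just calling the function with a state and an action
let state = { hp: 100 };
state = reducer(state, { type: 'attacked' });
state = reducer(state, { type: 'attacked' });
state = reducer(state, { type: 'heal' });

console.log(state.hp); // 90
```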
3. Global Strategy with Context API
When you need to share data without passing props manually through every level, use Context.
JavaScript
const ThemeContext = React.createContext('light');

// ... wrap your app in <ThemeContext.Provider>
const theme = useContext(ThemeContext);
Pros: Included in React, avoids prop-drilling.
Cons: Can cause unnecessary re-renders if not managed carefully.
Which one should you use?
Open your code editor and look at your component tree.
If the data is a single value: useState.
If the data is an object with related properties: useReducer.
If the data is “Global” (User info, Theme): Context API.
If the data is massive and changes constantly: Zustand or Redux.
The result of picking the right tool might not look “exciting” in the code itself, but your future self—who has to debug this in six months—will definitely give you a pat on the back!
Alternative state management frameworks such as Redux will be addressed in a future post. Keep your eyes peeled!
If there is one thing I have learnt during my career in the IT industry, it’s that the industry is a fickle beast. Trends and fashions come and go. Languages fall by the wayside (hey, COBOL74!). How often have you read an article declaring a new framework a “game changer”, only to realise after using it in anger that it does a fraction of what a venerable equivalent does in its sleep?
In this article I’m going to cover something that has not changed and has not gone out of fashion. It crops up again and again.
If there’s one thing you need to learn and, more importantly, USE as a software engineer, it is encapsulated (see what I did there?) in these five principles. But hey, enough of my yakkin’, whaddaya say? Let’s boogie!
The SOLID principles are a set of five design guidelines in object-oriented software development that help engineers create systems that are easy to maintain, scale, and understand. Introduced by Robert C. Martin, these principles aim to reduce “code rot” and make software more robust.
1. Single Responsibility Principle (SRP)
“A class should have one, and only one, reason to change.”
This principle states that a component should perform a single function. When a class handles multiple unrelated tasks, it becomes fragile: a change in one task might accidentally break another. You might be tempted to add a small related function, but don’t do it. Do what is right and create a new class, even if it has only one function. Smaller classes are great: fewer dependencies, easier to test. What’s not to like?
Example: Imagine a User class that handles both user data and saving that data to a database. If you change your database schema, you have to modify the User class.
Better Approach: Create a User class for data and a UserRepository class for database operations.
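A rough sketch of that split in JavaScript (class names follow the text; the Map-backed store is a stand-in for a real database):

```javascript
// User: knows about user data, nothing about storage
class User {
  constructor(name, email) {
    this.name = name;
    this.email = email;
  }
}

// UserRepository: knows about storage, nothing about business rules.
// If the database schema changes, only this class changes.
class UserRepository {
  constructor() {
    this.store = new Map(); // stand-in for a real database
  }
  save(user) {
    this.store.set(user.email, user);
  }
  findByEmail(email) {
    return this.store.get(email);
  }
}

const repo = new UserRepository();
repo.save(new User('Peter', 'peter@example.com'));
console.log(repo.findByEmail('peter@example.com').name); // Peter
```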
2. Open/Closed Principle (OCP)
“Software entities should be open for extension, but closed for modification.”
This somewhat opaquely named principle states that you should be able to add new functionality to a system without changing existing code. This prevents bugs from being introduced into parts of the application that are already working. It comes down to my tenet of minimal code change. Remember, every code change has the potential to introduce bugs!
Example: A Discount class that uses a series of if/else statements to check for “VIP” or “Seasonal” discounts. Adding a new discount type requires changing the existing logic.
Better Approach: Use an interface or abstract class DiscountStrategy. Each new discount type becomes a new class that implements this interface.
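JavaScript has no interfaces, but the same idea works with small strategy classes (a sketch; the class names and discount amounts are purely illustrative):

```javascript
// Each discount is its own class; adding a new one never touches existing code
class VipDiscount {
  apply(price) { return price * 0.8; }  // 20% off
}

class SeasonalDiscount {
  apply(price) { return price - 5; }    // flat 5 off
}

// The calculator is "closed for modification": it never changes
// when a new discount strategy is added.
function finalPrice(price, strategy) {
  return strategy.apply(price);
}

console.log(finalPrice(100, new VipDiscount()));      // 80
console.log(finalPrice(100, new SeasonalDiscount())); // 95
```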
3. Liskov Substitution Principle (LSP)
“Subtypes must be substitutable for their base types.”
Barbara Liskov is a pioneer who fundamentally changed how we write and organize code. Before her work in the 1970s, code was often a messy “spaghetti” of instructions. Liskov pioneered the concept of Data Abstraction. She led the team that created CLU, a programming language that introduced the idea of “abstract data types”—the direct ancestor of the “Classes” and “Objects” we use in almost every modern language like Java, Python, and C++. I hope you enjoyed that little history lesson. Let’s proceed.
This principle states that if a program is using a base class, it should be able to use any of its subclasses without knowing it or causing errors. The subclass must honor the “contract” of the parent class.
Example: A classic violation is the “Square-Rectangle” problem. If a Square inherits from Rectangle but throws an error when the height and width are set to different values, it breaks the program’s expectations.
Better Approach: If a subclass cannot perform the actions of the parent in the same way, they likely shouldn’t share that specific inheritance hierarchy.
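The Square-Rectangle violation is easy to demonstrate in code. A sketch (a Square that keeps its sides equal silently breaks code written against Rectangle):

```javascript
class Rectangle {
  setWidth(w)  { this.width = w; }
  setHeight(h) { this.height = h; }
  area()       { return this.width * this.height; }
}

// A Square that keeps its sides equal breaks the Rectangle "contract"
class Square extends Rectangle {
  setWidth(w)  { this.width = w; this.height = w; }
  setHeight(h) { this.width = h; this.height = h; }
}

// Code written against Rectangle assumes width and height are independent
function stretch(rect) {
  rect.setWidth(5);
  rect.setHeight(10);
  return rect.area();
}

console.log(stretch(new Rectangle())); // 50 – as expected
console.log(stretch(new Square()));    // 100 – surprise! LSP is violated
```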
4. Interface Segregation Principle (ISP)
“Clients should not be forced to depend on methods they do not use.”
I’ve seen this many times! You have to implement an interface in order to use a specific API call. You do so, only to realise you must implement functions you are not interested in, leading to the dreaded “not implemented” comment. Incidentally, this can be partly remedied with the Adapter Pattern if you come across it.
It is better to have many small, specific interfaces than one large, “fat” interface. This prevents implementing classes from being burdened with “dummy” methods that do nothing.
Example: An IMachine interface with Print(), Scan(), and Fax(). A basic Printer class would be forced to implement Scan() and Fax() even if it can’t perform those actions.
Better Approach: Break the interface into IPrinter, IScanner, and IFax.
5. Dependency Inversion Principle (DIP)
“Depend on abstractions, not concretions.”
High-level modules (the logic) should not depend on low-level modules (the tools). Both should depend on abstractions (interfaces). This “decouples” the code, making it easy to swap out components.
This is great for writing tests (and you should be writing tests, many, many tests!!) because it allows you to easily mock the dependencies.
Example: A NotificationService that directly creates an instance of EmailSender. If you want to switch to SMSSender, you have to rewrite the NotificationService.
Better Approach: The NotificationService should depend on an IMessageSender interface. You can then “inject” whichever sender you need at runtime.
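In JavaScript, duck typing stands in for the IMessageSender interface: any object with a send(message) method will do. A sketch (class names follow the text; the message formats are mine):

```javascript
// Any object with a send(message) method satisfies the "IMessageSender" contract
class EmailSender {
  send(message) { return `EMAIL: ${message}`; }
}

class SmsSender {
  send(message) { return `SMS: ${message}`; }
}

// High-level logic depends on the abstraction, not on a concrete sender
class NotificationService {
  constructor(sender) {
    this.sender = sender; // injected at runtime
  }
  notify(message) {
    return this.sender.send(message);
  }
}

// Swapping transports requires zero changes to NotificationService
console.log(new NotificationService(new EmailSender()).notify('hi')); // EMAIL: hi
console.log(new NotificationService(new SmsSender()).notify('hi'));   // SMS: hi
```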
Conclusion
At the end of the day, SOLID is about managing change. Requirements shift, APIs evolve, and businesses pivot.
By following these five principles, you aren’t just writing code for today; you’re leaving a map for the developer who has to touch this file six months from now. It turns software from a fragile house of cards into a robust, modular system.
Before I go, here is a test. Write some code. Store it away for a year. Then look at your code. Is it still readable and understandable? Is it SOLID?
In our last post, we mastered Mutations. We can now query, add, update, and delete films from our Hammer collection. However, every time we restart our Apollo server, our changes vanish into the ether. Our “Watched” list resets, and that film we deleted? It’s back from the dead—and not in a cool, technicolor, cinematic way.
To fix this, we need Data Persistence. In this post, we’ll swap our humble, local JavaScript array for a MongoDB database.
If you haven’t done so already, clone the lab GitHub repository using
git clone https://github.com/jmwollny/lab.git
Install MongoDB
I’m installing on a Mac; if you want to install MongoDB on another system, go to the MongoDB download page here.
brew tap mongodb/brew
brew install mongodb-community
Now start the MongoDB server. This command ensures that the MongoDB server will restart at logon.
brew services start mongodb/brew/mongodb-community
Now check we have a running instance. Type mongosh. If the shell appears you are golden and are ready to proceed to the next section. Type exit to leave the shell.
Setting Up the MongoDB connection
First, we need to install the MongoDB driver. In your terminal, run:
cd lab/graphql-tutorial-3
npm install mongoose
Mongoose is an Object Data Modeling (ODM) library that makes talking to MongoDB from Node.js much easier. If you open index.js you will see that the films array has been replaced with a MongoDB connection to a database called hammer_films.
const mongoose = require('mongoose');

// Connect to your local or Atlas MongoDB instance
mongoose.connect('mongodb://localhost:27017/hammer_films', {
  useNewUrlParser: true,
  useUnifiedTopology: true
});

const db = mongoose.connection;
db.on('error', console.error.bind(console, 'connection error:'));
db.once('open', () => console.log('Connected to MongoDB!'));
Defining the Data Model
In GraphQL, we have a Schema. In MongoDB (via Mongoose), we have a Model. These two need to mirror each other so our data flows correctly. A new file called Film.js contains the MongoDB model, which has been exported so it can be shared by seed.js (more about this later!).
const mongoose = require('mongoose');

const filmSchema = new mongoose.Schema({
  title: { type: String, required: true },
  year: { type: Number, required: true },
  watched: { type: Boolean, default: false }
});

// Export the model so both index.js and seed.js can use it
module.exports = mongoose.model('Film', filmSchema);
Updating the Resolvers
This is where the magic happens. Instead of using .find() or .splice() on a local array, we will use Mongoose methods which return Promises. GraphQL handles these asynchronous calls automatically.
The New Queries and Mutations
const resolvers = {
  Query: {
    films: async (parent, args) => {
      // 1. Build a dynamic query object
      let query = {};

      // Watched filter
      if (args.watched !== undefined) {
        query.watched = args.watched;
      }

      // Year filter (exact match)
      if (args.year) {
        query.year = args.year;
      }

      // Date range filter (using MongoDB operators $gte and $lte)
      if (args.where) {
        query.year = query.year || {}; // Initialize year object if it doesn't exist
        if (args.where.year_gte) {
          query.year.$gte = args.where.year_gte;
        }
        if (args.where.year_lte) {
          query.year.$lte = args.where.year_lte;
        }
      }

      // Search filter (using Regex for case-insensitive partial match)
      if (args.searchTerm) {
        query.title = { $regex: args.searchTerm, $options: 'i' };
      }

      // Execute the query against the database
      return await FilmModel.find(query);
    },

    // Find by ID - Mongoose maps GraphQL 'id' to MongoDB '_id' automatically
    film: async (parent, args) => await FilmModel.findById(args.id),
  },

  Mutation: {
    addFilm: async (parent, { input }) => {
      // Create a new instance and save it
      const newFilm = new FilmModel(input);
      return await newFilm.save();
    },

    updateWatched: async (parent, { id, watched }) => {
      const updatedFilm = await FilmModel.findByIdAndUpdate(
        id,
        { watched },
        { new: true } // This flag returns the record *after* it was updated
      );
      if (!updatedFilm) {
        throw new Error('Film not found');
      }
      return updatedFilm;
    },

    deleteFilm: async (parent, { id }) => {
      const deleted = await FilmModel.findByIdAndDelete(id);
      if (!deleted) {
        throw new Error('Film not found');
      }
      return await FilmModel.find();
    },
  },
};
Testing Persistence
Restart your server with node index.js. Now, head back to your GraphQL sandbox at http://localhost:4000/. We can test that after adding a film and restarting our Apollo server, the film still exists!
{
  "input": {
    "title": "The Brides of Dracula",
    "year": 1960,
    "watched": false
  }
}
Run the query, then shut down your server (Ctrl + C in the terminal). Start the server again using node index.js.
Run a query to retrieve all films.
query GetAllFilms {
  films {
    id
    title
    watched
    year
  }
}
Query result
{
  "data": {
    "films": [
      {
        "id": "69dfb8e8067cfb4bcaadeb6d",
        "title": "The Brides of Dracula",
        "watched": false,
        "year": 1960
      }
    ]
  }
}
If all has gone well, your data is still there! Unlike our local array, MongoDB has written this data to the disk.
Why use Mongoose with GraphQL?
You might notice that our FilmModel and our GraphQL type Film look very similar. This redundancy is actually a strength. The GraphQL Schema acts as a contract for your frontend (telling it what data it can ask for), while the Mongoose Model acts as a gatekeeper for your database (telling it how the data must be stored).
The “ID” Gotcha
MongoDB uses a field called _id by default. GraphQL usually expects id. Mongoose is smart enough to provide a virtual id field that maps to _id, so the existing queries like film(id: "...") continue to work without a hitch.
Importing the full list of films
Let’s finish by importing our film list into MongoDB, then we can get down to the fun job of watching every one and marking them as watched as we go.
To do this I have provided a handy script. Running the script will clear the database and import all films. All you need to do is open a terminal and run node seed.js.
const mongoose = require('mongoose');
const fs = require('fs');

// Import your Mongoose model
const Film = require('./models/Film');

const seedDatabase = async () => {
  try {
    // Connect to MongoDB
    await mongoose.connect('mongodb://127.0.0.1:27017/hammer_films');
    console.log("Connected to MongoDB for seeding...");

    // Read the JSON file
    const data = JSON.parse(fs.readFileSync('./films.json', 'utf-8'));

    // Clear existing films
    await Film.deleteMany({});
    console.log("Old records removed.");

    // Bulk insert the data
    await Film.insertMany(data);
    console.log(`${data.length} Hammer films successfully added to the database!`);

    // Close the connection
    process.exit();
  } catch (error) {
    console.error("Error seeding database:", error);
    process.exit(1);
  }
};

seedDatabase();
This script does the following:
Connects to MongoDB
Parses the list of films (note: we no longer need the id in the JSON)
Deletes all records in the database
Inserts all records defined in the JSON file
Conclusion
We’ve successfully moved our Hammer database from a “temporary” state to a “permanent” one. By integrating MongoDB, we’ve laid the groundwork for a real-world application. We are no longer just playing with variables in memory; we are managing a persistent data store. As an exercise, try creating different queries or, if you are feeling brave, add more fields to the schema. Have fun coding and, if you feel so inclined, watching one of the suggested films 🙂
In our last post, we built a robust way to search through 157 Hammer classics. But what happens when you finally sit down to watch The Brides of Dracula? You need a way to update that record.
In GraphQL, any operation that changes data is called a Mutation.
If you have not followed part 1 of this tutorial, go there now to pull the code from my GitHub repository.
1. Updating the Schema
First, we need to tell our server what these changes look like. We’ll add a Mutation type to our typeDefs. We have one to update a film entry (updateWatched) and one to delete a film (deleteFilm).
type Mutation {
  # Toggle the watched status of a film
  updateWatched(id: ID!, watched: Boolean!): Film

  # Delete a film from our collection
  deleteFilm(id: ID!): [Film]
}
2. Writing the Resolvers
Now, we implement the logic to update and delete a film record. Since we’re working with a local array of film data, we’ll use standard JavaScript array methods to find and modify it. Here we are using splice and find.
const resolvers = {
  // ... query resolvers
  Mutation: {
    updateWatched: (parent, { id, watched }) => {
      const film = films.find(f => f.id == id);
      if (!film) {
        throw new Error("Film not found");
      }
      film.watched = watched;
      return film;
    },
    deleteFilm: (parent, { id }) => {
      const index = films.findIndex(f => f.id == id);
      if (index == -1) {
        throw new Error("Film not found");
      }
      // Remove the film and return the updated list
      films.splice(index, 1);
      return films;
    }
  }
};
3. Testing in the Playground
Once you restart your server, you can test these live.
node index.js
🚀 Server ready at http://localhost:4000/
Navigating to http://localhost:4000/ will redirect to the GraphQL sandbox. Click the ‘Query your server’ button and the sandbox will open.
First of all, we need to find all unwatched Dracula films, returning the id. Quick quiz! Do you remember how to craft the query?
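One way to write it, reusing the films query parameters from part 1 (the operation name matches the GetNotWatched query referenced later in this post):

```graphql
query GetNotWatched {
  films(watched: false, searchTerm: "dracula") {
    id
    title
  }
}
```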
Query result

{
  "data": {
    "films": [
      { "id": "70", "title": "The Brides of Dracula" },
      { "id": "102", "title": "Dracula: Prince of Darkness" },
      { "id": "115", "title": "Dracula Has Risen from the Grave" },
      { "id": "122", "title": "Scars of Dracula" },
      { "id": "125", "title": "Countess Dracula" }
    ]
  }
}
Pick a film and remember the ID. This will be used in the next step. In my case I will pick the first film which has an id of 70.
Mark “Brides of Dracula” (ID: 70) as watched:
Paste this query into the sandbox (remember to substitute your own id if it is different).
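Based on the updateWatched mutation defined in the schema above, it might look like this:

```graphql
mutation {
  updateWatched(id: "70", watched: true) {
    id
    title
    watched
  }
}
```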
After running the update mutation, run the GetNotWatched query again to check that the film is no longer in the list.
Removing a film
Let’s remove “Brides of Dracula”.
mutation {
  deleteFilm(id: "70") {
    title
  }
}
If we run a query to return the film with id 70 GraphQL will now return null.
query GetFilm {
  film(id: 70) {
    id
    watched
    year
  }
}
Results from the query
{
  "data": {
    "film": null
  }
}
Adding a film
Let’s add the film back!
When adding a record, passing four separate arguments (ID, Title, Year, Watched) can get messy. Instead, we define an input type in our schema to group them together.
Update the Schema
We create a new input which specifies which fields are mandatory when creating a new Film. In our case all fields must be specified (indicated by the “!”). This input spec is then used in our addFilm mutation.
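A sketch of what that schema addition might look like (the FilmInput name is my assumption; the repository may use a different one):

```graphql
input FilmInput {
  id: ID!
  title: String!
  year: Int!
  watched: Boolean!
}

type Mutation {
  # ...existing mutations...
  addFilm(input: FilmInput!): Film
}
```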
{
  "input": {
    "id": "70",
    "title": "The Brides of Dracula",
    "year": 1960,
    "watched": false
  }
}
After running the mutation you can run the GetFilm query, which will show the resurrected film in all its glory!
query GetFilm {
  film(id: 70) {
    id
    watched
    year
  }
}
Why use input types?
Using an input object instead of flat arguments makes your API much more maintainable. If you decide to add a director or studio field later, you only have to update the input type, rather than changing the signature of the mutation across your entire codebase.
Why “Mutation” instead of “Query”?
While you could technically change data inside a Query resolver, it’s a major “no-go” in the GraphQL world. Using the Mutation keyword tells the server (and other developers) that this operation has side effects. It also ensures that if you send multiple mutations in one request, they run serially (one after another) to prevent data race conditions.
Conclusion
We’ve come a long way from a simple JavaScript array of my favourite films. By implementing Mutations, we’ve transformed our Hammer dataset into a functional API. We can now:
Create new entries to keep our database growing.
Update existing records to track our viewing progress.
Delete entries to keep our data clean and accurate.
This “CRUD” (Create, Read, Update, Delete) cycle is the backbone of almost every application you use daily. While we are currently managing this data in local memory via a simple array, the patterns we’ve used here—Input Types, Non-Nullable arguments, and Serial Mutation execution—are the exact same patterns you would use when connecting to a production-grade database like MongoDB or PostgreSQL.
What’s Next?
Now that the backend logic is solid, the next logical step is to explore Data Persistence. In the next post, we’ll look at how to hook this GraphQL server up to a database so that our “Watched” status doesn’t disappear every time we restart the server!
For years, REST (Representational State Transfer) has been the standard for web services. However, as applications grow in complexity, developers often find themselves juggling dozens of endpoints and dealing with over-fetching data.
GraphQL is a query language for your API and a server-side runtime for executing those queries using a type system you define for your data. Instead of multiple “dumb” endpoints, GraphQL provides a single “smart” endpoint that can return exactly what the client asks for.
Why GraphQL?
No More Over-fetching: You get exactly the data you request—nothing more, nothing less.
Single Request, Multiple Resources: You can fetch data from different sources in one trip to the server.
Strongly Typed: GraphQL uses a schema to define what is possible, which acts as a contract between the frontend and backend.
Self-Documenting: Because of the schema, tools like GraphiQL allow you to browse the API structure effortlessly.
For this tutorial we will be working with a list of classic Hammer Studios films. Each film will have id, title, year and watched fields.
The code below shows an example schema and the query definitions. The schema defines a Film as having four fields. Where a field type is suffixed with “!” it indicates that the field must not be null and will always return a value.
const typeDefs = gql`
  type Film {
    id: ID
    title: String!
    year: Int
    watched: Boolean
  }

  input FilmFilter {
    year_gte: Int
    year_lte: Int
  }

  type Query {
    # Return a list of films, optionally filtered by watched status, year, or search term in the title
    films(watched: Boolean, year: Int, searchTerm: String, where: FilmFilter): [Film]
    film(id: ID!): Film
  }
`;
After the schema we have the queries defined. If you define a query without any parameters, e.g. films: [Film], and then try to use a parameter in your query, GraphQL will complain…loudly, with a GRAPHQL_VALIDATION_FAILED error.
Here we have defined two queries.
films(watched: Boolean, year: Int, searchTerm: String, where: FilmFilter): [Film] – returns a list of Film objects. We can optionally supply zero or more of the following query parameters: watched, year, searchTerm, where (the last is used to support range queries on the year field)
film(id: ID!) – returns a single Film. The id parameter MUST be specified
Getting Started: A Simple Implementation
I have created a GitHub repo for this tutorial. It is straightforward to follow. Once you have cloned the repository, open readme.md for instructions. Alternatively, read on!
Once the dependencies have been installed you can run the Apollo server.
node index.js
You may be thinking, okay I’ve defined the Schema and the Queries, where do I get the data from and how do I map the queries to the underlying datasource?
The list of films is a hard-coded array defined in index.js. In practice we would be calling out to one or more data sources to get this information.
To map and filter the queries, this is where resolvers come in.
Open a terminal
cd lab/qraphql-tutorial
Open index.js. This file contains the schema, queries, and resolvers, and starts the Apollo server. In a production environment these would be split out into different files; we are using a single file to keep things simple.
At the bottom of this file you will see the resolvers definition. Inside the films arrow function we create filters for each of our defined query parameters.
To filter the dataset we check for the presence of each query parameter and apply it using the built-in JavaScript filter function, making sure to use the already-filtered list in any filters that follow.
When we are done we just return the list to the server.
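A simplified sketch of that pattern (standalone, with a tiny in-memory array rather than the full film list from the repo, and without the where range handling):

```javascript
// A tiny stand-in for the hard-coded films array in index.js
const films = [
  { id: '1', title: 'Dracula',                   year: 1958, watched: true },
  { id: '2', title: 'The Brides of Dracula',     year: 1960, watched: false },
  { id: '3', title: 'The Curse of Frankenstein', year: 1957, watched: false },
];

// Mirrors the shape of the films resolver: check each arg, narrow the list
function filmsResolver(args) {
  let result = films;
  if (args.watched !== undefined) {
    result = result.filter(f => f.watched === args.watched);
  }
  if (args.year !== undefined) {
    result = result.filter(f => f.year === args.year);
  }
  if (args.searchTerm) {
    result = result.filter(f =>
      f.title.toLowerCase().includes(args.searchTerm.toLowerCase()));
  }
  return result; // hand the narrowed list back to the server
}

console.log(filmsResolver({ watched: false, searchTerm: 'dracula' }).length); // 1
```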
For those used to SQL these may look a little odd at first, but they are quite straightforward once you get the hang of the syntax.
A simple query
Let’s retrieve the full list of films. Open your browser at http://localhost:4000/ then click the Query your Server button. If all is well the sandbox will open. Paste the following query.
query GetAllFilms {
  films {
    title
    watched
    year
  }
}
GetAllFilms is the name we give to our query. It can be anything that succinctly describes our query! Next we indicate that we want to execute the films query and return a list with title, watched and year fields. Note: you need to supply at least one field to be returned in the output.
Well done, you have successfully run your first GraphQL query 🙂
Using a filter in our query
Say we wanted all films containing the word “dracula” that were made in the 1970s and that we haven’t watched.
In order to specify a range we need to define some extra variables to support the query. In our case we need year_gte and year_lte to define our bounds.
input FilmFilter {
  year_gte: Int
  year_lte: Int
}
We then define a where query parameter that uses the FilmFilter.
A best practice when it comes to GraphQL is to separate the query data from the query itself. This is accomplished using variables. In the sandbox the variables JSON can be entered in the area underneath the query text box. Our new query will look like this.
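A sketch of how the query might look (the operation name GetDraculaFilms is my own choice):

```graphql
query GetDraculaFilms($where: FilmFilter, $searchTerm: String, $watched: Boolean) {
  films(where: $where, searchTerm: $searchTerm, watched: $watched) {
    title
    year
    watched
  }
}
```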
After the query name we pass in the list of variables that we will provide values for, along with their types: ($where: FilmFilter, $searchTerm: String, $watched: Boolean). In the films query, instead of declaring the values, we provide placeholders prefixed by ‘$’.
All that remains is to provide the query with solid values. The JSON will look like this.
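For the 1970s unwatched Dracula search, the variables JSON might be (a sketch consistent with the FilmFilter input defined above):

```json
{
  "where": { "year_gte": 1970, "year_lte": 1979 },
  "searchTerm": "dracula",
  "watched": false
}
```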
While GraphQL is powerful, it isn’t always the “REST-killer.”
| Use GraphQL When… | Use REST When… |
|---|---|
| You have complex, nested data requirements. | Your app is simple with few resources. |
| You support multiple clients (Web, iOS, Android) with different data needs. | You need standard HTTP caching mechanisms. |
| You want to aggregate data from multiple microservices. | You are building a very small, lightweight microservice. |
Final Thoughts
GraphQL shifts the power from the server to the client. By allowing the frontend to dictate the data structure, it speeds up development cycles and reduces the payload sent over the wire. Once you get to grips with the extra boilerplate and query syntax, it is surprisingly easy to use.
You may now be asking, “well, this is all well and good, but how do I update or delete records from the database?” Well, dear reader that is the topic for my next article.
In this post I will look at the different types of agentic systems. AI is moving fast and it is easy to become confused with the constantly evolving technologies. Let’s start!
There are two main types:
1. Workflows
2. Agents
Workflows
A workflow is a series of steps that follow a predefined, rigid path. Predictability is high because you know exactly what the system will do, but this comes at the expense of flexibility. If one step breaks, the whole process is likely to fail. There are five main types of workflow.
Prompt Chaining
This is probably one of the most common types of workflow out there. Given an input, the LLM (Large Language Model) carries out a task and optionally hands the results to some code, which transforms or cleans them before passing them to the next LLM in the chain, and so on…
The key is that you are breaking down a complicated single task into smaller, manageable steps.
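A purely illustrative sketch of that shape (callLLM is a synchronous stub standing in for a real model call; the gate between steps is plain code that validates the previous output):

```javascript
// Stub standing in for a real LLM API call (synchronous for illustration)
function callLLM(prompt) {
  return `RESULT(${prompt})`;
}

// A gate between steps: plain code that validates/cleans the previous output
function gate(text) {
  if (!text.startsWith('RESULT(')) throw new Error('Step failed validation');
  return text.trim();
}

// Prompt chaining: each step consumes the (checked) output of the last,
// breaking one complicated task into smaller, manageable steps
function chain(input) {
  const outline = gate(callLLM(`Outline: ${input}`));
  const draft   = gate(callLLM(`Draft from: ${outline}`));
  return gate(callLLM(`Polish: ${draft}`));
}

console.log(chain('blog post about Hammer films').startsWith('RESULT(Polish:')); // true
```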
Routing
The routing workflow is where an initial router LLM analyzes and categorizes an incoming query and directs it to the most appropriate specialized sub-task.
Parallelization
The input is passed to a coordinator (code) that breaks the task into independent pieces. These run simultaneously across multiple LLMs. The last sub-task is not an LLM but some code that will take the results and aggregate them. This is best for speed and processing large volumes of data.
Orchestrator/worker
Here, an LLM acts as the manager. It dynamically decides which sub-tasks are needed and assigns them to “workers”. The Orchestrator then synthesizes the various results into a final response. This is more flexible than standard parallelization because the “manager” adapts to the complexity of the query.
Evaluator
In this workflow you have two LLMs in a feedback loop. One is the generator and one is the evaluator. The first LLM takes the initial user prompt and creates a draft. The second reviews the draft against a set of given criteria and provides detailed feedback. The generator receives the feedback and produces a second version. This continues until the evaluator produces a pass or has hit its “max loops” limit. Without a “max loops” limit, an evaluator and generator can sometimes get stuck in an infinite loop (and burn your API budget!!).
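The loop structure, including the all-important guard, can be sketched with stubs (both "models" here are trivial fakes; in reality each would be an API call):

```javascript
const MAX_LOOPS = 5; // without this guard the loop could run forever

// Stub generator: quality improves with each round of feedback received
function generate(prompt, feedbackCount) {
  return { text: `${prompt} (rev ${feedbackCount})`, quality: feedbackCount };
}

// Stub evaluator: passes once quality reaches a threshold
function evaluate(draft) {
  return draft.quality >= 3;
}

function evaluatorLoop(prompt) {
  let feedbackCount = 0;
  let draft = generate(prompt, feedbackCount);
  // Keep revising until the evaluator passes the draft or we hit the limit
  while (!evaluate(draft) && feedbackCount < MAX_LOOPS) {
    feedbackCount += 1; // the evaluator's feedback drives the next revision
    draft = generate(prompt, feedbackCount);
  }
  return draft;
}

console.log(evaluatorLoop('write an intro').text); // "write an intro (rev 3)"
```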
Agents
Unlike workflows, Agents use a reasoning loop to determine their own path. They are characterized by their ability to use Tools—like searching the web or executing code to solve open-ended problems.
The LLM gets to choose its own design and plot its own path to solving the problem. This autonomy makes agents very powerful but less predictable.
What are the drawbacks?
Unpredictable path – you do not know in which order the sub-tasks will run, although tools can be used to give an agent boundaries
Unpredictable quality – you cannot guarantee the standard of the output
Unpredictable cost – you don’t know how long it will take to run
Mitigations
Monitoring – it is essential to have the visibility to understand what interactions are taking place
Guardrails – protection to ensure models are doing what they should be doing: safely, consistently, and within the given boundaries
Conclusion
In the current landscape of AI engineering, Workflows remain the most popular choice for production-grade applications. This is because businesses value reliability and cost-control. Patterns like Prompt Chaining and Routing allow developers to build systems that are fast, explainable, and easy to debug. If you are building a customer support bot or an automated report generator, a structured workflow is usually your best bet.
However, the industry is rapidly shifting toward Agents and Evaluator-Optimizer loops for high-stakes or creative tasks. While more “expensive” in terms of compute and time, these systems provide a level of quality and autonomy that simple chains cannot match. They are becoming the standard for coding assistants, research tools, and complex problem-solving.
The Rule of Thumb
Use Workflows when the process is well-defined and you need 100% consistency.
Use Agents when the task is open-ended and the path to the solution is too complex to map out by hand.
Sorry it has taken me so long to continue with this series. There were little things that got in the way, such as C*****19 and going through redundancy, but let's put those little things aside and recap. Last time we created a web service using Node.js and Express which will be used to capture environmental data from our Raspberry PI Sense HAT.
In this article we are going to hook things up by sending the data collected from the Raspberry PI, to our web service. We will also be updating our endpoints to handle the data correctly. Let’s get started!
First of all open the collector.py file.
We are going to POST the data to our web service endpoint. Find the line where we are checking if we have reached the interval and replace it with the code shown here.
if minute_count == MEASUREMENT_INTERVAL:
    # Create the payload object
    payload = {
        'date': dt.strftime("%Y/%m/%d %H:%M:%S"),
        'temperature': round(temp_c, 2),
        'pressure': round(sense.get_pressure(), 2),
        'humidity': round(sense.get_humidity(), 2),
    }
    # Encode the payload and POST it to the web service
    data = urllib.urlencode(payload)
    request = urllib2.Request(END_POINT, data)
    response = urllib2.urlopen(request).read()
    print(payload)
    minute_count = 0
We are using a couple of Python libraries called urllib and urllib2 to do the heavy lifting of encoding our payload and sending it across to our Node.js server.
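Note that urllib2 only exists in Python 2. If your PI is running Python 3, the equivalent (a sketch on my part, not the code used in this series) lives in urllib.request and urllib.parse, and the POST body must be bytes:

```python
# Python 3 equivalent of the Python 2 urllib/urllib2 snippet above.
# END_POINT and the payload values are assumptions matching the collector.
from urllib.parse import urlencode
from urllib.request import Request, urlopen

END_POINT = "http://raspberrypi:3000/api/environment"
payload = {
    'date': '2020/06/01 12:00:00',
    'temperature': 21.5,
    'pressure': 1013.25,
    'humidity': 45.2,
}
data = urlencode(payload).encode('utf-8')  # POST bodies must be bytes in Python 3
request = Request(END_POINT, data)        # supplying data makes this a POST
# response = urlopen(request).read()      # uncomment once the server is running
```

The urlopen call is left commented out because it requires the Node.js server from the next section to be listening.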
All that is left is to add the new endpoint to our Node.js server to process the request and update the list to return an actual list of weather data. Exciting eh! Open up another terminal session and navigate to the server directory. Using your editor of choice open up the index.js file.
Update the endpoints as shown below.
// State used by the metadata endpoint, plus a small helper to check
// that the required fields are present in the request body
let total = 0;
let averageTemp = 0;
let lastEntry = null;

function isValid(body) {
  return body && body.date && body.temperature && body.pressure && body.humidity;
}

// Provide service metadata
app.get('/api/environment/meta', (req, res) => {
  res.header('Access-Control-Allow-Origin', '*');
  res.send({
    averageTemp: averageTemp,
    count: data.length,
    lastEntry: lastEntry
  });
});

// List all entries
app.get('/api/environment/entries', (req, res) => {
  res.header('Access-Control-Allow-Origin', '*');
  res.send(data);
});

// Accept a new entry posted by the collector
app.post('/api/environment', (req, res) => {
  if (!isValid(req.body)) {
    res.status(400).send('Invalid request, required fields missing.');
    return;
  }
  const count = data.length + 1;
  const entry = {
    id: count,
    date: req.body.date,
    temperature: req.body.temperature,
    pressure: req.body.pressure,
    humidity: req.body.humidity
  };
  lastEntry = entry;
  total += parseFloat(req.body.temperature);
  averageTemp = total / count;
  data.push(entry);
  res.json(entry);
});
You may recall last time we added a dummy /api/environment/entries endpoint which simply returned an empty array.
Let’s flesh this out. The new /api/environment endpoint is defined as a POST method, which means the data is sent as part of the body of the request. We validate that we do indeed have a valid body, then update the count. We build a JSON object by pulling out the parts of the request we are interested in. Finally we update the lastEntry variable and work out the average temperature to date, before adding the entry to our list.
With these changes in place we can run our collector and Node.js server to see the end-to-end implementation working in all its glory. I would recommend opening two separate terminals and laying them out side by side.
In the terminal for the Python collector start the data harvest using the command python collector.py. On your PI you should see regular temperature updates on the matrix display.
Weather station running in the Raspberry PI
In the second terminal ensure you are in the collector/server directory and start the Node.js server using the command node index.js. If all is well you will see the message Listening on port 3000.
Terminal sessions showing the collector and Node.js server running on the PI
After a while you will see entries printed in the server console, indicating that weather data has been collected from the PI and sent to our server.
Now comes the exciting bit. We can try out our new endpoints. Open a new browser tab and check the new endpoints are functioning correctly.
The new endpoints shown using the RESTED Chrome extension
Well there we have it. A simple way of using your PI to collect weather data. I hope this has been useful and inspired you to create your own projects using the PI!!
In the last article we created a Python script to collect environmental data from a Sense HAT-equipped Raspberry PI.
This article will add to that by creating a web service that will display all logged entries. In the next blog post we will add the ability to upload data from PI to the web service.
This web service will be running on the Raspberry PI but of course it could run anywhere as long as it supplies an endpoint to enable consumers to use it.
Building a RESTful API – do’s and do not’s
The web service will use RESTful principles. REST is a set of best practices to use when designing an API. In a nutshell:
DO return JSON
DO set the Content-Type header correctly i.e. application/json. Note, when using the PATCH method the content type must be application/merge-patch+json
DON’T use verbs in endpoint names, e.g. use /songs instead of /listSongs
DO use plurals e.g. /api/songs/2019
DO return error details in the response body
DO make use of status codes when returning errors
400 – bad request, 401 – unauthorised, 403 – forbidden, 404 – not found, 500 – internal server error
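To make the last two points concrete, here is a small sketch (plain Python, a hypothetical shape, not this article's server code) of pairing a status code with a JSON error body:

```python
# Sketch of the "status code + error details in the body" convention.
import json
from http import HTTPStatus


def error_response(status, message):
    # Pair the numeric status code with a JSON body describing the error
    body = {"error": {"code": status.value, "message": message}}
    return status.value, json.dumps(body)


code, body = error_response(HTTPStatus.NOT_FOUND, "entry not found")
print(code, body)  # 404 {"error": {"code": 404, "message": "entry not found"}}
```

The client can then branch on the status code while still showing the human-readable message from the body.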
We will also add a metadata endpoint, /api/environment/meta, to retrieve information such as the number of entries, the average temperature and the last entry that was uploaded.
Creating the web service using Express
Let’s get started! Connect your PI to the network either wirelessly or using a cable. I use an Ethernet cable plugged directly into my laptop.
Power up your PI!
SSH into your PI. I used PuTTY
Navigate to the collector directory we created in the last blog post, then create and enter a new server directory.
mkdir server
cd server
We are going to use Node.js to create our server. Node.js is based on Chrome's V8 JavaScript engine but adds modules to deal with IO, HTTP and much more. It is basically a JavaScript engine wrapped in a C++ executable. It uses a single-threaded event loop which hands off requests asynchronously, making it well suited to handling quick, high-throughput requests.
Out of the box it is very easy to create a simple REST API. We will be using a Node module called Express, a lightweight framework that makes creating REST APIs and managing routing much easier.
When you have successfully installed node and npm, return to the server directory we created earlier and install Express: npm install express --save
Create a file called index.js using your editor of choice. I used nano. nano index.js
Paste the following:
// Import the express module and create the express app
const express = require('express');
const app = express();

// Install middleware that can parse a URL-encoded payload
app.use(express.urlencoded({ extended: false }));

// Create an array to hold the environmental data
const data = [];

// Endpoints for the web service
// List entries
app.get('/api/environment/entries', (req, res) => {
  res.send(data); // Just send an empty array for now
});

// Create a web server, running on your port of choice or 3000
const port = process.env.PORT || 3000;
app.listen(port, () => {
  console.log(`Listening on port ${port}`);
});
This server will respond to HTTP GET requests at the /api/environment/entries endpoint listening on port 3000.
Start the node server node index.js
Open your browser and go to http://raspberrypi:3000/api/environment/entries
The result will not be very exciting as you will just see an empty array returned in the browser. However, give yourself a pat on the back. You have created your first fledgling web service!
Today’s article is a bit of fun. We are looking at the four levels of styling a simple logon dialog. These UI components are pretty ubiquitous. Here are a few examples of HTML logon dialogs
Logon, logon everywhere.
Level 1 – basic styling
They all share the same elements: at a minimum two input boxes, an OK button and usually a cancel button. Sometimes there are labels next to each input box. Other times placeholder text is shown in the input controls, which is replaced by whatever you type into the control. In addition there is often a link to reset the password should it be forgotten.
With this in mind our logon dialog will have two input controls with placeholder text, a single button and a link to reset a forgotten password. Let’s get to work. Our level 1 logon dialog will be vanilla. Some HTML and very little CSS. The CSS is there just to layout the control on the page. Here is our basic HTML.
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Login</title>
  <meta name="description" content="Login to your account">
  <meta name="author" content="Jonathan">
  <link rel="stylesheet" href="css/styles.css?v=1.0">
</head>
<body>
  <div class="container">
    <div class="log-form">
      <h2>Login to your account</h2>
      <form>
        <input type="text" title="username" placeholder="username" />
        <input type="password" title="password" placeholder="password" />
        <button type="submit" class="btn">Login</button>
        <a class="forgot" href="#">Forgot Username?</a>
      </form>
    </div>
  </div>
</body>
</html>
Nothing out of the ordinary here. We have defined a container div to allow it to be centred on the page. Inside this div we have another div which holds a standard form element. The form element has two input tags, a button and an anchor. All nice and simple. Let’s now take a look at the CSS
The container div is styled to fit the whole browser and uses the flex layout to easily centre the logon dialog vertically and horizontally. Note the height and width need to be 100% for this to work. The rest of the CSS is concerned with adding padding and margins to space out the elements in the dialog. This is the result.
Level 1 logon dialog with minimal styling
Yes it is functional but does it look good? Not really. I’d score it a C and that’s pushing it. The next thing to do is ‘style it up’. Add colours and make it pop. The first thing to do is choose a colour scheme. It’s Summer here in the UK at the moment and the sun is out so I’m thinking orange…let’s get going onto level 2.
Level 2 – add some colour
This is much better. We have a bright background with the dialog centred as before. The dialog has been lifted by introducing a shadow effect around it. The clunky inputs have been styled with a single clean line and the text has been given a deep orange colour to complement the background. The vanilla old skool button has been replaced with a solid orange rectangle with white text. This is all good, but look what happens when we navigate around the dialog.
The default blue focus outlines do not look right with the new dialog theme. Luckily we can do something about that.
Level 3 – add some finesse
While the level 2 version of our dialog looked good, in use the default browser behaviour let it down. So for the next level we are going to provide custom styling to handle the form's interactivity. At the same time we are going to add some pleasing animation to really finesse our dialog.
The CSS focus, active and hover pseudo-selectors have been added, along with animated transitions when moving between states. To highlight the button when it has focus I have added a box-shadow to act as a ‘focus ring’ around the button. The link now adds an underline style when it has focus. Here is the complete CSS.
I thought it would be cool to have a bouncing effect when the page is loaded, so the dialog jumps up and bounces ‘down’ onto the screen. This is fairly straightforward to do using animation keyframes.
The animation is referenced in the log-form class by supplying the animation's name and duration. This tells the browser which element the animation is going to animate.
Here the animation is called bounce; the timing function allows you to define how the animation moves. Details here. Next you define the animation using the @keyframes rule in your CSS file.
The @keyframes rule defines what to do and when to do it. So in the example above, at 10% we are squatting down to jump by making the dialog shorter and fatter. At 30% the dialog jumps by springing up; in doing so it becomes thinner and taller. At halfway we land on the ground before doing a little bounce after landing. This makes more sense seen in slow motion, followed by the faster version.
The transform property allows you to rotate, scale, skew or translate(move) an element. In this case we are using a combination of scale and translate.
This data structure is an important one to know. A Binary Search Tree allows you to maintain a data set in sorted order. This in turn allows you to efficiently locate data items. If you were to use a standard array you would need to sort it every time you added a new data item. Not so with the BST. So with this in mind it must be pretty tricky to implement one, yes? It’s actually quite straightforward. Let us dive in.
Each node in the BST holds some data which can be compared. This is important because it is this comparison that enables data to be chunked or grouped, based on the notion that one node's value is greater or less than another's.
A simple Binary Search Tree
Our data set looks like this 5,2,6,9,1,3,4.
In the example above 5 is the first item to be added. There is nothing in the tree to start with so the root contains 5. Next up in our list is 2. We would examine the first(root) node and ask ourselves this question “Is our new value greater or less than the current node?”. Here 2 is less than 5 so it is placed in the left-hand side of the root node. Next up is 6. Again we would look at the first node(the root) and determine that 6 is greater than 5 and therefore place it in the right-hand side of the root node. Now our root node has two children 2 and 6. Now if we want to add the value 9 we look at the root, 9 is greater than 5 so we will place it on the right-hand side. However we can’t do this because it already has the value 6. Now we look at node containing 6. 9 is greater than 6 so 9 is placed in the right-hand side of the node that contains 6, and so on…
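Before moving to the Java implementation, the walkthrough above can be sketched in a few lines of Python (illustrative only; the class and function names here are my own):

```python
# Sketch of the insertion walkthrough above: smaller values go left,
# larger (or equal) values go right.
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None


def insert(root, value):
    # An empty slot becomes the new node; otherwise recurse down a side.
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root


def in_order(node):
    # Left subtree, node, right subtree: yields the values in sorted order.
    if node is None:
        return []
    return in_order(node.left) + [node.value] + in_order(node.right)


root = None
for v in [5, 2, 6, 9, 1, 3, 4]:
    root = insert(root, v)
print(in_order(root))  # [1, 2, 3, 4, 5, 6, 9]
```

An in-order traversal (left subtree, then node, then right subtree) visits the values in ascending order, which is exactly the "maintained in sorted order" property described above.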
Using this information we can define a Java class to represent a node in a BST.
public class BSTNode {
    public int data;
    public BSTNode left;
    public BSTNode right;

    public BSTNode(int value) {
        data = value;
        left = null;
        right = null;
    }
}
The BSTNode class is very simple. It has three public members. One to hold the data, in this case to keep things easy it is just an integer number for easy comparison. The other two hold the left and right child nodes. When the class is constructed the data value is passed in and set. Now to create the tree class itself and a method to add a value to the tree.
public class BinarySearchTree {
    public BSTNode root;

    public BinarySearchTree() {
        root = null;
    }

    /**
     * Add a new node
     * @param value The node value
     * @return The new node
     */
    public BSTNode add(int value) {
        // Create the new node
        BSTNode newNode = new BSTNode(value);

        // If there is no root then create it and return
        if (this.root == null) {
            this.root = newNode;
            return this.root;
        }

        // Recurse through the tree to find the node the new value should hang off
        BSTNode node = nextNode(this.root, value);
        if (value < node.data) {
            node.left = newNode;
        } else {
            node.right = newNode;
        }
        return newNode;
    }

    /**
     * Get the next node that does not have a child on the relevant side
     * @param node  The current node
     * @param value The value to be compared against the node's data value
     * @return The node
     */
    private BSTNode nextNode(BSTNode node, int value) {
        boolean leftNode = value < node.data;
        if (leftNode && node.left != null) {
            return nextNode(node.left, value);
        } else if (!leftNode && node.right != null) {
            return nextNode(node.right, value);
        }
        return node;
    }
}
In order to traverse the tree effectively I have implemented a recursive function which drills down into each child, comparing the value as it goes; when it finds the last correct node without the relevant left/right child, it returns it. The calling function then sets the correct class member, left or right, depending on the value.
Let’s test our BST. We are going to add a unit test to ensure that the tree has been built correctly.