Wednesday, July 16, 2025

A Simple Agent to summarize web content using Embabel

AI (Artificial Intelligence) is everywhere, and it is definitely here to stay.

A great use of AI for us as Software Developers is the creation of Intelligent Agents that, with the help of Large Language Models (LLMs), can solve problems that would be complex or impossible to address through traditional programming.

A few weeks ago I learned about a new framework that Rod Johnson (creator of the Spring Framework) and others are working on, called Embabel.

Embabel is a framework for creating agent workflows on the JVM that mixes LLM interactions (via prompts) with code and domain models (Java/Kotlin classes). The framework is built on top of Spring AI.

The framework is relatively new and still under development; there is no official documentation yet, and some of the things I explain here may change in the future (although I don’t expect radical changes).

The code for this example can be found in my Github repository at: abadongutierrez/basic-embabel-agent

Use Case: Web content summarization

Almost all of us have used LLMs to summarize some text. In fact, summarization is one of the great uses of LLMs, and in today’s example we’ll use Embabel to create an Agent that extracts the content of the websites we specify and summarizes it.

In general, we’ll use Embabel to build an agent that:

  1. Receives free text input from the user (via Spring Shell).
  2. Extracts the web links mentioned by the user (with the help of an LLM).
  3. Visits each site and obtains its content as plain text without HTML tags (using Tools).
  4. Generates a summary of each site’s content (again, using an LLM).

To visit each link and extract the content from that website, we’ll use the JSoup library. With this library we can easily connect to a website and extract only the text without HTML tags as follows:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

// Connect and fetch the HTML document
Document doc = Jsoup.connect("https://en.wikipedia.org/").get();
// Get only the text (no HTML tags)
String text = doc.text();

How is the Agent created?

To define an Agent we need to create a class annotated with @Agent. This is very similar to using @Component and the derived annotations that exist in the Spring Framework. In fact, @Agent is itself based on @Component, so the agent is managed as a Spring bean and, therefore, we can take advantage of dependency injection.

@Agent(description = "Agent to summarize content of web pages")  
public class SummarizingAgent {
    @Action  
    public WebPageLinks extractWebPagesLinks(UserInput userInput) { ... }

    @Action
    public SummarizedPages summarizeWebPages(WebPageLinks webPageLinks, OperationContext operationContext) { ... }

    @AchievesGoal(description = "Show summarized content of the web pages to the user")  
    @Action  
    public SummarizedPages showSummarization(SummarizedPages summarizedPages) { ... }
}
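
Because @Agent builds on @Component, collaborators can be injected like in any other Spring bean. The constructor is not shown in this post, but based on the pieces that appear later (the JSoupTool bean and the app.useOpenAI flag), it could look roughly like this sketch:

// Inside SummarizingAgent (hypothetical sketch; the real constructor may differ)
private final JSoupTool jSoupTool;
private final boolean useOpenAI;

// Constructor injection: JSoupTool is a regular Spring bean,
// and the flag comes from application.properties.
public SummarizingAgent(JSoupTool jSoupTool,
                        @Value("${app.useOpenAI:false}") boolean useOpenAI) {
    this.jSoupTool = jSoupTool;
    this.useOpenAI = useOpenAI;
}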

It’s important to give the agent a good description: when you interact with agents through Spring Shell, Embabel uses an LLM to decide which agent should respond to the user’s request, by analyzing the user’s intention and matching it with the most suitable agent.

Each method that represents a step in the agent’s flow must be annotated with @Action. The method that represents the agent’s final goal is also annotated with @AchievesGoal.

When we interact with agents via the Spring Shell interface, the first step is generally the @Action method that receives a UserInput as an argument. I mention this because Spring Shell is not the only way to interact with agents; there are other mechanisms that I’ll try to explore in future posts.

Agent Flow

There is no way to specify the agent’s flow programmatically. The framework, as stated on its homepage, tries to go beyond simply defining a flow as a state machine: it applies intelligent planning at the start of the flow and again after each step is executed. The framework detects the flow from the relationships between methods, inspecting the data types of their “inputs” (method arguments) and “outputs” (return types).
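
To make this type-driven chaining concrete: in our agent, UserInput flows into extractWebPagesLinks, which produces a WebPageLinks; summarizeWebPages consumes that and produces a SummarizedPages; and showSummarization consumes the SummarizedPages to reach the goal. The actual domain classes live in the repository, but a plausible sketch of them (based on the JSON output shown later) might be:

import java.util.List;

// Hypothetical sketch of the domain model; the real classes in the repository may differ.
// The planner chains actions because the output type of one action
// matches an input type of another.
public record WebPageLinks(List<String> urls) {}

public record SummarizedPage(String url, String summary) {}

public record SummarizedPages(List<SummarizedPage> summarizedPages) {}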

SummarizingAgent

In this tutorial we create the SummarizingAgent, whose first step is to extract URLs from the user input. To achieve this we use an LLM: the user’s instructions are unformatted free text, and LLMs are good at analyzing text and extracting the information we ask for. This is implemented in the extractWebPagesLinks method.

@Action  
public WebPageLinks extractWebPagesLinks(UserInput userInput) {  
    String prompt = String.format("""  
            Extracts the urls from the provided user input.

            <user-input>  
            %s
            </user-input>

            Extract only the links mentioned in the user input, dont add any other links.  
            """.trim(), userInput.getContent());  
    return PromptRunner.usingLlm().createObjectIfPossible(prompt, WebPageLinks.class);  
}

The second step in the flow is to extract the text content from each website, and here we again rely on an LLM to obtain a summary of that content. This is implemented in the summarizeWebPages method. The method behaves in one of two ways, depending on the app.useOpenAI flag defined in application.properties. This small application is designed to use Ollama with the local models llama3.2 and all-minilm, but you can also use OpenAI by setting app.useOpenAI to true; in that case you need to set the OPENAI_API_KEY environment variable for the application to work correctly.
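
For example, to try the OpenAI path you would set the flag in application.properties:

app.useOpenAI=true

and export OPENAI_API_KEY in the environment before starting the application.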

I implemented summarizeWebPages in two ways because llama3.2 is not as powerful as the OpenAI models and I ran into many issues when using the same prompt for both. So when llama3.2 is used, I use a different prompt, plus a fallback alternative in case the first prompt fails.

...
@Value("${app.useOpenAI:false}") boolean useOpenAI

...

@Action  
public SummarizedPages summarizeWebPages(WebPageLinks webPageLinks, OperationContext operationContext) {  
    if (this.useOpenAI) {  
        return getSummarizedPagesUsingOpenAI(webPageLinks);  
    }  
    return getSummarizedPagesUsingLocalModels(webPageLinks, operationContext);  
}
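
The full getSummarizedPagesUsingLocalModels is in the repository. As a rough sketch of the fallback idea only (assuming createObjectIfPossible returns null when the model can’t produce the requested object, and using hypothetical prompt-building helpers), it boils down to something like this:

// Hypothetical sketch of the local-model fallback; the real method in the
// repository also uses the OperationContext and more elaborate prompts.
private SummarizedPages getSummarizedPagesUsingLocalModels(WebPageLinks webPageLinks,
                                                           OperationContext operationContext) {
    String primaryPrompt = buildSimplePromptFor(webPageLinks);        // hypothetical helper
    SummarizedPages result = PromptRunner
            .usingLlm()
            .withToolObject(jSoupTool)
            .createObjectIfPossible(primaryPrompt, SummarizedPages.class);
    if (result == null) {
        // Fall back to an alternative prompt when llama3.2 struggles with the first one
        String fallbackPrompt = buildFallbackPromptFor(webPageLinks); // hypothetical helper
        result = PromptRunner
                .usingLlm()
                .withToolObject(jSoupTool)
                .createObjectIfPossible(fallbackPrompt, SummarizedPages.class);
    }
    return result;
}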

The last step in the flow simply returns the set of pages and their summaries so that the framework prints them to the Spring Shell console. This is implemented in the showSummarization method, which must also be annotated with @AchievesGoal because once this method runs, the agent’s goal has been achieved.

@AchievesGoal(description = "Show summarized content of the web pages to the user")  
@Action  
public SummarizedPages showSummarization(SummarizedPages summarizedPages) {  
    return summarizedPages;  
}

Interaction with LLMs

The framework has the concept of PromptRunners which, as the name indicates, run a prompt against an LLM.

A PromptRunner has methods to execute a prompt and convert its output into a domain object, such as createObjectIfPossible or createObject. This gives us strong typing in our programs, which in turn makes refactoring easier.

The framework defines certain LLMs that programs will use by default. In our case we’re using local models with Ollama so in the application.properties file we can find the following properties that indicate which models will be used by default:

embabel.models.default-llm=llama3.2:latest  
embabel.models.default-embedding-model=all-minilm:latest  
embabel.models.embedding-services.best=all-minilm:latest  
embabel.models.embedding-services.cheapest=all-minilm:latest  
embabel.models.llms.best=llama3.2:latest  
embabel.models.llms.cheapest=llama3.2:latest  

embabel.agent-platform.ranking.llm=llama3.2:latest

When working with LLMs, one of the main things you must specify is the prompt, but PromptRunners also make it easy to specify the “Tools” we want available while a prompt is executed. In this small application we register JSoup as a “Tool” to extract the text from a website.

“Tools” can be specified using the Spring AI @Tool annotation, as implemented in the JSoupTool class, which is also a Spring bean that we can easily inject into the agent.

@Component  
public class JSoupTool {
    ...

    @Tool(name = "jsoup", description = "A tool to extract text from web pages using JSoup")  
    public String getPageTextTool(String url) {  
        ...
    }
}
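
The body of getPageTextTool is elided above; presumably it boils down to the same JSoup call shown at the beginning of the post. A minimal sketch (error handling in the real class may differ) could be:

// Minimal sketch of the tool body (uses org.jsoup.Jsoup and java.io.IOException);
// the real implementation in the repository may handle errors and timeouts differently.
@Tool(name = "jsoup", description = "A tool to extract text from web pages using JSoup")
public String getPageTextTool(String url) {
    try {
        // Connect, download the HTML document, and return only its visible text
        return Jsoup.connect(url).get().text();
    } catch (IOException e) {
        throw new RuntimeException("Could not fetch page: " + url, e);
    }
}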

PromptRunners use the default LLMs, or you can specify different models using LlmOptions. In this application we use that capability to pick the OpenAI model when the app.useOpenAI flag is active.

String prompt = " ... ";
BuildableLlmOptions llmOptions = LlmOptions.fromCriteria(  
        ModelSelectionCriteria.byName("gpt-4.1-mini")  
);  
return PromptRunner  
        .usingLlm(llmOptions)  
        .withToolObject(jSoupTool)  
        .createObjectIfPossible(prompt, SummarizedPages.class);

Execution of the program

This application is configured to use Spring Shell as its interface, as indicated by the @EnableAgentShell annotation on the class that contains the main method.

@SpringBootApplication  
@EnableAgents(  
       loggingTheme = LoggingThemes.STAR_WARS,  
       localModels = {LocalModels.OLLAMA}  
)  
@EnableAgentShell
public class BasicEmbabelAgentApplication {  
    public static void main(String[] args) {  
       SpringApplication.run(BasicEmbabelAgentApplication.class, args);  
    }  
}

As you can also see, it’s configured to discover and use local LLMs through Ollama.
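
If you go the local route, both models referenced in application.properties must be available in your Ollama installation; assuming a standard install, they can be downloaded with:

# Download the models used by this example (assumes Ollama is installed and running)
ollama pull llama3.2
ollama pull all-minilm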

Once the application is executed, the Spring Shell prompt appears where we can use the x (execute) command to indicate the “user input” and make the Embabel agent platform search for and select the appropriate agent to handle the user’s request:

...
21:05:55.957 [main] INFO  DelegatingAgentScanningBeanPostProcessor - All deferred beans were post-processed.
21:05:55.958 [main] INFO  BasicEmbabelAgentApplication - Started BasicEmbabelAgentApplication in 1.834 seconds (process running for 2.074)
Fear is the path to the dark side.
starwars> x "summarize the content of the following page https://en.wikipedia.org/wiki/Alan_Turing"

output:

You asked: UserInput(content=summarize the content of the following page https://en.wikipedia.org/wiki/Alan_Turing, timestamp=2025-07-08T03:15:04.089557Z)

{
  "summarizedPages" : [ {
    "url" : "https://en.wikipedia.org/wiki/Alan_Turing",
    "summary" : "Alan Turing (1912-1954) was a British mathematician, computer scientist, logician, philosopher, and cryptographer who made significant contributions to the development of computer science, artificial intelligence, and cryptography.\n\n**Early Life and Education**\n\nTuring was born on June 23, 1912, in London, England. He studied mathematics at King's College, Cambridge, where he graduated with a First Class Honours degree in Mathematics. During World War II, Turing worked at the Government Code and Cypher School (GC&CS) at Bletchley Park, where he played a crucial role in cracking the German Enigma code.\n\n**Contributions to Computing**\n\nTuring is considered one of the founders of computer science. He proposed the theoretical foundations of modern computer science, including:\n\n1. **The Turing Machine**: a mathematical model for a computer's central processing unit (CPU).\n2. **The Universal Turing Machine**: a machine that could simulate any other machine.\n3. **Computability Theory**: the study of what can be computed by a machine.\n\n**Codebreaking and Cryptography**\n\nAt Bletchley Park, Turing worked with a team to crack the Enigma code, which was used by the German military during World War II. His work significantly contributed to the Allied victory.\n\n**Personal Life and Later Years**\n\nTuring's personal life was marked by tragedy. In 1952, he was convicted of gross indecency for his relationship with a man, which led to his chemical castration and eventual death in 1954 at the age of 41.\n\n**Legacy**\n\nTuring's legacy is profound:\n\n1. **Computer Science**: Turing's work laid the foundation for modern computer science.\n2. **Artificial Intelligence**: His ideas on machine intelligence and computation have influenced AI research.\n3. **Cryptography**: Turing's contributions to codebreaking and cryptography have had a lasting impact on national security.\n\n**Recognition**\n\nIn 2009, the British government officially apologized for Turing's treatment and posthumously pardoned him. In 2017, he was featured on the £50 note, making him the first openly gay person to be featured on a British banknote.\n\nTuring's life and work serve as a testament to his innovative spirit and contributions to science and society. His legacy continues to inspire new generations of computer scientists, mathematicians, and thinkers."
  } ]
}

If you look carefully at the logs the application prints while running, you’ll notice the steps Embabel takes to select the agent to execute, the actions and goals that agent contains, and the planning it performs for the remaining actions after each step.

Conclusion

Although Embabel is still at an early stage of development, it already looks like a promising option for developers working on the JVM. Embabel is written in Kotlin but, as Rod Johnson has mentioned in interviews, it can be used naturally from Java, as the code in this example shows.

Its declarative approach lets you create intelligent agents with annotations, without defining flows explicitly. Instead, a planning algorithm (which does not use LLMs) infers the execution plan from the agent’s context and re-plans after each step. Additionally, it integrates natively with well-known technologies like Spring and Spring AI, which eases adoption. It also includes support for unit and integration testing, making it suitable for serious projects from the start.
