AiClient
The AiClient interface streamlines interactions with AI Models.
It simplifies connecting to various AI Models, each with potentially unique APIs, by offering a uniform interface for interaction.
Currently, the interface supports only text-based input and output. You should expect some of the classes and interfaces to change as we add other input and output types.
The design of the AiClient interface centers around two primary goals:
- Portability: It allows easy integration with different AI Models, letting developers switch between models with minimal code changes. This design aligns with Spring’s philosophy of modularity and interchangeability.
- Simplicity: By using companion classes like Prompt for input encapsulation and AiResponse for output handling, the AiClient interface simplifies communication with AI Models. It manages the complexity of request preparation and response parsing, offering a direct and simplified API interaction.
API Overview
This section provides a guide to the AiClient interface and associated classes.
AiClient
Here is the AiClient
interface definition:
public interface AiClient {

    default String generate(String message) {
        // implementation omitted
    }

    AiResponse generate(Prompt prompt);
}
The generate method with a String parameter simplifies initial use, avoiding the complexities of the more sophisticated Prompt and AiResponse classes.
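For instance, a minimal call with the String overload might look like the following sketch (assuming an already-configured AiClient instance, here named aiClient, obtained for example by injection):

// Simple String-in, String-out interaction; aiClient is an
// already-configured AiClient instance (obtained here by injection).
String reply = aiClient.generate("Tell me a joke");
System.out.println(reply);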
Prompt
In a real-world application, it is most common to use the generate method, taking a Prompt instance and returning an AiResponse.
The Prompt class encapsulates a list of Message objects.
The following listing shows a truncated version of the Prompt class, excluding constructors and other utility methods:
public class Prompt {

    private final List<Message> messages;

    // constructors and utility methods omitted
}
Message
The Message interface encapsulates a textual message, a collection of attributes as a Map, and a categorization known as MessageType. The interface is defined as follows:
public interface Message {

    String getContent();

    Map<String, Object> getProperties();

    MessageType getMessageType();
}
The Message interface has various implementations that correspond to the categories of messages that an AI model can process.
Some models, like OpenAI’s chat completion endpoint, distinguish between message categories based on conversational roles, effectively mapped by the MessageType.
For instance, OpenAI recognizes message categories for distinct conversational roles such as “system,” “user,” or “assistant.”
While the term MessageType might imply a specific message format, in this context it effectively designates the role a message plays in the dialogue.
For AI models that do not use specific roles, the UserMessage implementation acts as a standard category, typically representing user-generated inquiries or instructions.
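To illustrate, a role-based prompt might be assembled as in the following sketch. It assumes a SystemMessage implementation alongside UserMessage and a Prompt constructor that accepts a list of messages; those details are omitted from the listings above, so treat the exact names as assumptions:

// Hypothetical sketch: SystemMessage and the List-based Prompt constructor
// are assumptions, since constructors and implementations are omitted above.
Message systemMessage = new SystemMessage("You are a helpful assistant.");
Message userMessage = new UserMessage("Tell me a joke about cows.");

Prompt prompt = new Prompt(List.of(systemMessage, userMessage));
AiResponse response = aiClient.generate(prompt);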
To understand the practical application and the relationship between Prompt and Message, especially in the context of these roles or message categories, see the detailed explanations in the [Prompts] section.
AiResponse
The structure of the AiResponse class is as follows:
public class AiResponse {

    private final List<Generation> generations;

    // other methods omitted
}
The AiResponse class holds the AI Model’s output, with each Generation instance containing one of potentially multiple outputs from a single prompt.
The AiResponse class also carries a map of key-value pairs providing metadata about the AI Model’s response. This feature is still in progress and is not elaborated on in this document.
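As a sketch of consuming the response, assuming accessor methods such as getGenerations() on AiResponse and getText() on Generation (the accessors are omitted from the listing above, so the names are assumptions):

// Hypothetical accessors: getGenerations() and getText() are assumed names,
// as the accessor methods are omitted from the listing above.
AiResponse response = aiClient.generate(prompt);
for (Generation generation : response.getGenerations()) {
    System.out.println(generation.getText());
}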
Available Implementations
The AiClient interface has the following available implementations:
- OpenAI: Using the Theo Kanning client library.
- Azure OpenAI: Using Microsoft’s OpenAI client library.
- Hugging Face: Using the Hugging Face Hosted Inference Service. This gives you access to hundreds of models.
- Ollama: Run large language models locally.

Planned implementations:
- Amazon Bedrock: This can provide access to many AI models.
- Google Vertex: Providing access to 'Bard' (AKA PaLM 2).
Other implementations are welcome; the list is by no means closed.
OpenAI-Compatible Models
A variety of models compatible with the OpenAI API are available, including those that can be operated locally, such as [LocalAI](github.com/mudler/LocalAI). The standard configuration for connecting to the OpenAI API is through the spring.ai.openai.baseUrl property, which defaults to api.openai.com.
To link the OpenAI client to a compatible model that uses the OpenAI API, you should adjust the spring.ai.openai.baseUrl property to the corresponding URL of the model you wish to connect to.
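For example, to point the client at a locally running OpenAI-compatible server, you might set the property in application.properties (the URL below is illustrative; use whatever address your local server listens on):

# application.properties
# Illustrative value: replace with the address of your OpenAI-compatible server.
spring.ai.openai.baseUrl=http://localhost:8080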
Configuration
This section describes how to configure each of the supported models.
OpenAI
Add the Spring Boot starter to your project’s dependencies:

<dependency>
    <groupId>org.springframework.experimental.ai</groupId>
    <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
    <version>0.7.1-SNAPSHOT</version>
</dependency>
This makes an instance of the AiClient that is backed by the Theo Kanning client library available for injection in your application classes.
The Spring AI project defines a configuration property named spring.ai.openai.api-key that you should set to the value of the API Key obtained from openai.com.
Exporting an environment variable is one way to set that configuration property.
export SPRING_AI_OPENAI_API_KEY=<INSERT KEY HERE>
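Alternatively, you can set the property in your application.properties file:

spring.ai.openai.api-key=<INSERT KEY HERE>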
Azure OpenAI
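Add the Spring Boot starter to your project’s dependencies:

<dependency>
    <groupId>org.springframework.experimental.ai</groupId>
    <artifactId>spring-ai-azure-openai-spring-boot-starter</artifactId>
    <version>0.7.1-SNAPSHOT</version>
</dependency>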
This makes an instance of the AiClient that is backed by Microsoft’s OpenAI client library available for injection in your application classes.
The Spring AI project defines a configuration property named spring.ai.azure.openai.api-key that you should set to the value of the API Key obtained from Azure.
There is also a configuration property named spring.ai.azure.openai.endpoint that you should set to the endpoint URL obtained when provisioning your model in Azure.
Exporting environment variables is one way to set these configuration properties.
export SPRING_AI_AZURE_OPENAI_API_KEY=<INSERT KEY HERE>
export SPRING_AI_AZURE_OPENAI_ENDPOINT=<INSERT ENDPOINT URL HERE>
Hugging Face
There is not yet a Spring Boot starter for this client implementation, so you should add the dependency on the Hugging Face client implementation to your project’s dependencies and export an environment variable:
<dependency>
    <groupId>org.springframework.experimental.ai</groupId>
    <artifactId>spring-ai-huggingface</artifactId>
    <version>0.7.1-SNAPSHOT</version>
</dependency>
export HUGGINGFACE_API_KEY=your_api_key_here
Obtain the endpoint URL of the inference endpoint. You can find this on the Inference Endpoint’s UI.
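Because there is no starter, you wire the client bean yourself. The following is only a sketch: the client class name and constructor signature are assumptions, so consult the spring-ai-huggingface Javadoc for the actual API:

// Hypothetical configuration: HuggingFaceAiClient and its constructor
// arguments are assumed names -- consult the module's Javadoc.
@Configuration
public class AiConfig {

    @Bean
    public AiClient aiClient() {
        String apiKey = System.getenv("HUGGINGFACE_API_KEY");
        String endpointUrl = "<INSERT ENDPOINT URL HERE>";
        return new HuggingFaceAiClient(apiKey, endpointUrl);
    }
}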
Ollama
There is not yet a Spring Boot starter for this client implementation, so you should add the dependency on the Ollama client implementation to your project’s dependencies:
<dependency>
    <groupId>org.springframework.experimental.ai</groupId>
    <artifactId>spring-ai-ollama</artifactId>
    <version>0.7.1-SNAPSHOT</version>
</dependency>
Example Usage
The following listing shows a simple "Hello, world" example. It uses the AiClient.generate method that takes a String as input and returns a String as output:
import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Import for AiClient omitted; it comes from the Spring AI module.
@RestController
public class SimpleAiController {

    private final AiClient aiClient;

    @Autowired
    public SimpleAiController(AiClient aiClient) {
        this.aiClient = aiClient;
    }

    @GetMapping("/ai/generate")
    public Map<String, String> generate(@RequestParam(value = "message", defaultValue = "Tell me a joke") String message) {
        return Map.of("generation", aiClient.generate(message));
    }
}
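With the application running, you can exercise the endpoint with any HTTP client, for example (assuming the default Spring Boot port of 8080):

curl http://localhost:8080/ai/generate
curl --get --data-urlencode 'message=Tell me a joke about cows' http://localhost:8080/ai/generate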
API Docs
You can find detailed API documentation in the project’s published Javadoc.
Feedback and Contributions
The project’s GitHub Discussions page is a great place to send feedback.