Analyzes images to extract information, answer questions, or perform visual understanding tasks.
POST
/v1/vision/analyze
Request Body
The following parameters can be included in the request body:
Paramètres
model
string
Required
Default Value:
alphaedge-vision-3-1-2505
ID of the vision model to use.
image
string
Required
A base64-encoded image or an image URL.
prompt
string
Required
The text prompt describing what to analyze in the image.
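The `image` parameter can carry the image inline as a base64 data URL, as the examples below do. A minimal sketch of producing such a value from a local file (the helper name and the default MIME type are illustrative choices, not part of the API):

```python
import base64

def to_data_url(path: str, mime: str = "image/png") -> str:
    """Read an image file and wrap its bytes in a base64 data URL."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# image = to_data_url("photo.png") can then be passed as the `image` parameter.
```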
Successful Response
The following fields are returned in a successful response:
Response fields
id
string
Required
A unique identifier for the analysis.
object
string
Required
The object type, which is always "vision.analysis".
model
string
Required
The vision model used.
analysis
string
Required
The model's text analysis of the image.
created
integer
Required
The Unix timestamp (in seconds) when the analysis was created.
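The `created` field holds seconds since the Unix epoch, so it can be converted to a readable UTC time with the standard library (the sample value here matches the response example further down):

```python
from datetime import datetime, timezone

created = 1234567890  # `created` value from a sample response
dt = datetime.fromtimestamp(created, tz=timezone.utc)
print(dt.isoformat())  # → 2009-02-13T23:31:30+00:00
```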
Examples
Code examples for using this endpoint:
typescript
import { AlphaEdge } from '@alphaedge/alphaedge';

const alphaedge = new AlphaEdge({
  apiKey: process.env.ALPHAEDGE_API_KEY,
});

const result = await alphaedge.vision.analyze({
  model: 'alphaedge-vision-3-1-2505',
  image: 'data:image/png;base64,...',
  prompt: 'What is in this image?'
});
python
from alphaedge import AlphaEdge

alphaedge = AlphaEdge(api_key="your-api-key")

result = alphaedge.vision.analyze(
    model="alphaedge-vision-3-1-2505",
    image="data:image/png;base64,...",
    prompt="What is in this image?"
)
curl
curl https://api.alphaedge-ai.com/v1/vision/analyze \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ALPHAEDGE_API_KEY" \
  -d '{
    "model": "alphaedge-vision-3-1-2505",
    "image": "data:image/png;base64,...",
    "prompt": "What is in this image?"
  }'
Response
Example API response:
json
{
  "id": "vision-abc123",
  "object": "vision.analysis",
  "model": "alphaedge-vision-3-1-2505",
  "analysis": "This image shows a beautiful sunset over the ocean.",
  "created": 1234567890
}