
Example: Text Classification Agent

Build a simple text classification agent that categorizes movie reviews.

Overview

In this example, you'll create an agent that:

  • Classifies movie reviews as positive or negative
  • Learns from 200 labeled examples
  • Achieves ~90% accuracy
  • Makes predictions on new reviews

Project Structure

movie-classifier/
├── config.yaml
├── data/
│   ├── reviews.csv
│   ├── train.csv
│   └── test.csv
├── models/
│   └── classifier.pkl
└── README.md

Step 1: Create Agent

iovalence create-agent --name movie-classifier --type classification
cd movie-classifier

Step 2: Prepare Data

Create data/reviews.csv with movie reviews:

text,sentiment
"Amazing film! Best movie ever!",positive
"Absolute garbage. Total waste of time.",negative
"Great plot and amazing actors",positive
"Boring and predictable",negative
"This movie was phenomenal!",positive

Provide at least 100-200 labeled examples for good results; more data generally improves accuracy.
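
Before training, it helps to sanity-check the CSV for missing rows and class balance. A minimal sketch using pandas, with an inline sample mirroring the file above (in practice you would read `data/reviews.csv` directly):

```python
import pandas as pd
from io import StringIO

# Inline sample mirroring data/reviews.csv
csv_data = StringIO('''text,sentiment
"Amazing film! Best movie ever!",positive
"Absolute garbage. Total waste of time.",negative
"Great plot and amazing actors",positive
"Boring and predictable",negative
"This movie was phenomenal!",positive
''')

df = pd.read_csv(csv_data)

# Drop rows with missing text, then report the class balance
df = df.dropna(subset=["text"])
counts = df["sentiment"].value_counts()
print(counts.to_dict())  # {'positive': 3, 'negative': 2}
```

A heavily skewed class balance is worth fixing (by collecting more examples of the minority class) before training.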

Step 3: Configure Agent

Edit config.yaml:

agent:
  name: movie-classifier
  type: classification
  framework: tensorflow

data:
  train_file: data/train.csv
  test_file: data/test.csv
  features: [text]
  target: sentiment
  preprocessing:
    lowercase: true
    remove_special_chars: false

training:
  epochs: 20
  batch_size: 32
  validation_split: 0.2
  learning_rate: 0.001
  early_stopping: true
  patience: 5
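
The `early_stopping` and `patience` settings halt training once validation loss stops improving, which prevents overfitting on a small dataset. The underlying logic can be sketched in plain Python (this is an illustration of the concept, not iovalence's internal implementation):

```python
def should_stop(val_losses, patience=5):
    """Return True when the best validation loss is more than
    `patience` epochs old, i.e. no improvement for `patience` epochs."""
    if len(val_losses) <= patience:
        return False
    best_epoch = val_losses.index(min(val_losses))
    return len(val_losses) - 1 - best_epoch >= patience

# Validation loss plateaus after the third epoch (index 2)
history = [0.45, 0.31, 0.28, 0.29, 0.30, 0.29, 0.31, 0.30]
print(should_stop(history))  # True
```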

Step 4: Split Data

python split_data.py

Script:

import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv('data/reviews.csv')

# Stratify on the label so train and test keep the same class balance;
# a fixed random_state makes the split reproducible.
train, test = train_test_split(
    df, test_size=0.2, stratify=df['sentiment'], random_state=42
)

train.to_csv('data/train.csv', index=False)
test.to_csv('data/test.csv', index=False)

Step 5: Train

iovalence train --config config.yaml

Expected output:

Epoch 1/20: loss=0.456, acc=0.78, val_loss=0.421, val_acc=0.82
Epoch 2/20: loss=0.312, acc=0.85, val_loss=0.298, val_acc=0.88
...
Epoch 20/20: loss=0.089, acc=0.94, val_loss=0.095, val_acc=0.91
Training complete! Best accuracy: 91%

Step 6: Evaluate

iovalence evaluate --model models/classifier.pkl --test data/test.csv

Output:

Accuracy:  0.91
Precision: 0.89
Recall:    0.93
F1-Score:  0.91
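
These metrics can be reproduced with scikit-learn (the same library the split script uses) if you want to verify them yourself or evaluate on a custom slice of the data. A minimal sketch on toy labels (positive=1, negative=0):

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

# Toy labels standing in for test-set ground truth and model predictions
y_true = [1, 1, 1, 0, 0, 1, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 1, 1, 0, 1]

print(f"Accuracy:  {accuracy_score(y_true, y_pred):.2f}")
print(f"Precision: {precision_score(y_true, y_pred):.2f}")
print(f"Recall:    {recall_score(y_true, y_pred):.2f}")
print(f"F1-Score:  {f1_score(y_true, y_pred):.2f}")
```

Precision is the fraction of predicted positives that are correct; recall is the fraction of actual positives that were found; F1 is their harmonic mean.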

Step 7: Make Predictions

iovalence predict \
  --model models/classifier.pkl \
  --input "This movie was absolutely incredible!"

Output:

{
  "input": "This movie was absolutely incredible!",
  "prediction": "positive",
  "confidence": 0.97,
  "scores": {
    "positive": 0.97,
    "negative": 0.03
  }
}
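
The JSON output is easy to consume programmatically. A minimal sketch parsing the response above (field names are taken from the sample output; the 0.8 confidence threshold is an illustrative choice, not part of iovalence):

```python
import json

# Raw response copied from the sample output above
raw = '''{
  "input": "This movie was absolutely incredible!",
  "prediction": "positive",
  "confidence": 0.97,
  "scores": {"positive": 0.97, "negative": 0.03}
}'''

result = json.loads(raw)

# Only act on the label when the model is sufficiently confident
label = result["prediction"] if result["confidence"] >= 0.8 else "uncertain"
print(label)  # positive
```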

Step 8: Batch Predictions

iovalence predict \
  --model models/classifier.pkl \
  --input-file new_reviews.csv \
  --output predictions.csv
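
Once batch predictions are written, you can summarize them with pandas. The column names below are an assumption about the shape of `predictions.csv`; check the actual output file produced by your iovalence version:

```python
import pandas as pd
from io import StringIO

# Stand-in for predictions.csv; these column names are an assumption
csv_data = StringIO('''text,prediction,confidence
"Loved every minute",positive,0.95
"Fell asleep halfway",negative,0.88
"A new classic",positive,0.91
''')

preds = pd.read_csv(csv_data)

# Count predicted labels across the batch
summary = preds["prediction"].value_counts().to_dict()
print(summary)  # {'positive': 2, 'negative': 1}
```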

Results Analysis

What Works Well

  • Simple positive/negative classification
  • Good with clear language
  • Handles variations of same sentiment

Where It Struggles

  • Sarcasm detection
  • Mixed sentiments
  • Domain-specific language

Improvements

# Try for better accuracy:
training:
  epochs: 50              # More training
  batch_size: 16          # Smaller batches
  learning_rate: 0.0005   # Lower learning rate
  layers: [256, 128]      # Larger model

Deploy Your Classifier

iovalence deploy --model models/classifier.pkl --target docker
docker run -p 5000:5000 movie-classifier:latest

# Test with curl
curl -X POST http://localhost:5000/predict \
  -H "Content-Type: application/json" \
  -d '{"text": "Amazing movie!"}'
