---
name: LINE Platform Integration
description: Integrating AI-powered chatbots and services with LINE platform for messaging, rich menus, and customer engagement in Thai market.
---
# LINE Platform Integration
## Overview
LINE Platform integration enables AI-powered chatbots and services to leverage LINE's messaging platform, rich menus, and customer engagement features. This is particularly important for the Thai market where LINE is the dominant messaging platform with over 50 million users.
## Why This Matters
- **Reduces Response Time**: Automated instant responses improve customer support efficiency
- **Increases Engagement**: Rich, interactive messaging keeps users engaged with your service
- **Lowers Support Costs**: Automated workflows reduce the need for large support teams
- **Improves Customer Experience**: Consistent, 24/7 availability enhances satisfaction
- **Enables Thai Market Reach**: Access to 50+ million LINE users in Thailand
- **Provides Analytics**: Built-in analytics track user engagement and behavior
---
## Core Concepts
### 1. LINE Platform Overview
#### What is LINE Platform?
LINE is a popular messaging app in Asia, especially in Thailand, Japan, and Taiwan. With over 50 million users in Thailand alone, it's the dominant communication platform.
#### Key Features
- **Messaging**: Send and receive text, image, video, and audio messages
- **Rich Menus**: Custom menu interfaces displayed above chat area
- **Quick Replies**: Suggested response buttons for streamlined interactions
- **Flex Messages**: Rich, interactive message templates
- **Webhooks**: Receive real-time events from LINE
- **Analytics**: Track user engagement and bot performance
#### LINE Bot SDK
Official SDKs available for multiple platforms:
- **Python**: line-bot-sdk
- **Node.js**: @line/bot-sdk
- **Java**: line-bot-sdk-java
### 2. LINE Webhook Setup
#### Webhook Handler (Python)
```python
from flask import Flask, request, abort
from linebot import LineBotApi, WebhookHandler
from linebot.exceptions import InvalidSignatureError
from linebot.models import MessageEvent, TextMessage, TextSendMessage

app = Flask(__name__)

# Initialize LINE API
line_bot_api = LineBotApi('YOUR_CHANNEL_ACCESS_TOKEN')
handler = WebhookHandler('YOUR_CHANNEL_SECRET')

@app.route("/webhook", methods=['POST'])
def webhook():
    # Get request body and signature
    body = request.get_data(as_text=True)
    signature = request.headers['X-Line-Signature']

    # Verify signature
    try:
        handler.handle(body, signature)
    except InvalidSignatureError:
        abort(400)
    return 'OK'

@handler.add(MessageEvent, message=TextMessage)
def handle_message(event):
    # Get user message
    user_message = event.message.text

    # Generate response (your own logic or an LLM call)
    response = generate_response(user_message)

    # Send reply (TextSendMessage is the outgoing type; TextMessage is incoming)
    line_bot_api.reply_message(
        event.reply_token,
        TextSendMessage(text=response)
    )

if __name__ == "__main__":
    app.run(port=5000)
```
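For local testing you can forge the header LINE would send: per the Messaging API docs, `X-Line-Signature` is the Base64-encoded HMAC-SHA256 of the raw request body, keyed with the channel secret. A minimal sketch (the helper name is illustrative):

```python
import base64
import hashlib
import hmac

def compute_line_signature(channel_secret: str, body: str) -> str:
    """Compute the X-Line-Signature value LINE would send for `body`.

    POST the body to your local webhook with this header and the
    handler's signature check should pass.
    """
    digest = hmac.new(
        channel_secret.encode("utf-8"),
        body.encode("utf-8"),
        hashlib.sha256,
    ).digest()
    return base64.b64encode(digest).decode("utf-8")
```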
#### Webhook Handler (Node.js)
```javascript
const express = require('express');
const line = require('@line/bot-sdk');

const app = express();

// Initialize LINE API
const config = {
  channelAccessToken: process.env.CHANNEL_ACCESS_TOKEN,
  channelSecret: process.env.CHANNEL_SECRET
};
const client = new line.Client(config);

// Webhook endpoint (line.middleware verifies the signature)
app.post('/webhook', line.middleware(config), (req, res) => {
  Promise
    .all(req.body.events.map(handleEvent))
    .then((result) => res.json(result))
    .catch((err) => {
      console.error(err);
      res.status(500).end();
    });
});

// Handle events
async function handleEvent(event) {
  if (event.type !== 'message' || event.message.type !== 'text') {
    return Promise.resolve(null);
  }
  const response = generateResponse(event.message.text); // your own logic
  return client.replyMessage(event.replyToken, {
    type: 'text',
    text: response
  });
}

app.listen(3000, () => {
  console.log('LINE bot is running on port 3000');
});
```
### 3. LINE Message Types
#### Text Messages
```python
from linebot.models import TextSendMessage

# Simple text message
line_bot_api.push_message(
    'USER_ID',
    TextSendMessage(text='Hello, World!')
)

# Multiple text messages (LINE allows up to 5 message objects per request)
line_bot_api.push_message(
    'USER_ID',
    [
        TextSendMessage(text='Hello!'),
        TextSendMessage(text='How can I help you?')
    ]
)
```
#### Image Messages
```python
from linebot.models import ImageSendMessage

# Send image from URL (both URLs must be HTTPS)
line_bot_api.push_message(
    'USER_ID',
    ImageSendMessage(
        original_content_url='https://example.com/image.jpg',
        preview_image_url='https://example.com/image-preview.jpg'
    )
)
```
#### Flex Messages
```python
from linebot.models import FlexSendMessage

flex_message = FlexSendMessage(
    alt_text='This is a flex message',
    contents={
        "type": "bubble",
        "body": {
            "type": "box",
            "layout": "horizontal",
            "contents": [
                {
                    "type": "text",
                    "text": "Hello,",
                    "size": "xl",
                    "weight": "bold"
                },
                {
                    "type": "text",
                    "text": "World",
                    "size": "xl",
                    "weight": "bold",
                    "color": "#FF0000"
                }
            ]
        }
    }
)

line_bot_api.push_message('USER_ID', flex_message)
```
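Because Flex contents are plain JSON, dynamic messages are easier to maintain when built by small helper functions rather than inlined nested dicts. A sketch (the helper name and layout choices are illustrative, not part of the SDK):

```python
def build_text_bubble(lines):
    """Build a minimal Flex 'bubble' dict from (text, color) pairs.

    Returns the plain-JSON contents accepted by FlexSendMessage,
    following the bubble structure shown above; pass color=None to
    use the default text color.
    """
    contents = []
    for text, color in lines:
        item = {"type": "text", "text": text, "size": "xl", "weight": "bold"}
        if color:
            item["color"] = color
        contents.append(item)
    return {
        "type": "bubble",
        "body": {"type": "box", "layout": "horizontal", "contents": contents},
    }
```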
### 4. Rich Menus
#### Create Rich Menu
```python
from linebot.models import (
    RichMenu, RichMenuArea, RichMenuBounds, RichMenuSize, URIAction
)

# Define rich menu (2500x1686 is a standard full-size menu image)
rich_menu = RichMenu(
    size=RichMenuSize(width=2500, height=1686),
    selected=False,
    name="Main Menu",
    chat_bar_text="Tap to open menu",
    areas=[
        RichMenuArea(
            bounds=RichMenuBounds(x=0, y=0, width=1250, height=843),
            action=URIAction(label="Website", uri="https://example.com")
        ),
        RichMenuArea(
            bounds=RichMenuBounds(x=1250, y=0, width=1250, height=843),
            action=URIAction(label="Contact", uri="https://example.com/contact")
        )
    ]
)

# Create rich menu
rich_menu_id = line_bot_api.create_rich_menu(rich_menu)

# Upload rich menu image
with open('rich_menu.png', 'rb') as f:
    line_bot_api.set_rich_menu_image(rich_menu_id, 'image/png', f)

# Set as default for all users
line_bot_api.set_default_rich_menu(rich_menu_id)
```
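Tap areas are just rectangles, so a small helper can compute an even grid instead of hand-writing coordinates. A sketch using plain dicts in the Messaging API JSON shape (the helper itself is not part of the SDK):

```python
def grid_areas(width, height, cols, rows, actions):
    """Split a rich-menu image into a cols x rows grid of tap areas.

    `actions` is a flat, row-major list of action dicts; returns
    bounds/action dicts matching the Messaging API JSON shape.
    Assumes width and height divide evenly by cols and rows.
    """
    cell_w, cell_h = width // cols, height // rows
    areas = []
    for i, action in enumerate(actions):
        col, row = i % cols, i // cols
        areas.append({
            "bounds": {"x": col * cell_w, "y": row * cell_h,
                       "width": cell_w, "height": cell_h},
            "action": action,
        })
    return areas
```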
### 5. Quick Replies
```python
from linebot.models import (
    QuickReply, QuickReplyButton, MessageAction, TextSendMessage
)

quick_reply = QuickReply(
    items=[
        QuickReplyButton(
            action=MessageAction(label="Yes", text="Yes")
        ),
        QuickReplyButton(
            action=MessageAction(label="No", text="No")
        ),
        QuickReplyButton(
            action=MessageAction(label="More info", text="More info")
        )
    ]
)

line_bot_api.push_message(
    'USER_ID',
    TextSendMessage(text='Do you like this?', quick_reply=quick_reply)
)
```
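The same payload can be expressed as plain Messaging API JSON, which is handy when buttons are generated dynamically. A sketch (the helper name is illustrative; note that LINE caps the number of quick-reply items, so very long label lists should be truncated first):

```python
def text_quick_reply(labels):
    """Build the Messaging API quick-reply JSON for a list of labels.

    Each label becomes a button that sends its own text back; returns
    a plain dict mirroring the QuickReply model used above.
    """
    return {
        "items": [
            {
                "type": "action",
                "action": {"type": "message", "label": label, "text": label},
            }
            for label in labels
        ]
    }
```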
### 6. LLM Integration
#### OpenAI Integration with LINE
```python
from openai import OpenAI
from linebot.models import MessageEvent, TextMessage, TextSendMessage

# Initialize OpenAI
client = OpenAI(api_key='YOUR_OPENAI_API_KEY')

# Generate response with OpenAI
def generate_response(message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": message}
        ],
        temperature=0.7,
        max_tokens=500
    )
    return response.choices[0].message.content

# Handle message with LLM
@handler.add(MessageEvent, message=TextMessage)
def handle_message(event):
    response = generate_response(event.message.text)
    line_bot_api.reply_message(
        event.reply_token,
        TextSendMessage(text=response)
    )
```
#### Conversation Memory with Redis
```python
import json
from datetime import datetime

import redis

# Initialize Redis
r = redis.Redis(host='localhost', port=6379, db=0)

# Save message to memory
def save_message(user_id: str, role: str, content: str):
    key = f"conversation:{user_id}"
    message = {
        'role': role,
        'content': content,
        'timestamp': datetime.now().isoformat()
    }
    r.lpush(key, json.dumps(message))
    r.expire(key, 86400)  # Expire after 24 hours

# Get conversation history (newest first, so reverse before building the prompt)
def get_conversation(user_id: str) -> list:
    key = f"conversation:{user_id}"
    messages = r.lrange(key, 0, 9)  # Get last 10 messages
    return [json.loads(msg) for msg in messages]

# Generate response with memory
def generate_response(user_id: str, message: str) -> str:
    history = get_conversation(user_id)
    messages = [
        {"role": "system", "content": "You are a helpful assistant."}
    ]
    for msg in reversed(history):
        messages.append({"role": msg['role'], "content": msg['content']})
    messages.append({"role": "user", "content": message})

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=0.7,
        max_tokens=500
    )

    # Save both sides of the turn
    reply = response.choices[0].message.content
    save_message(user_id, 'user', message)
    save_message(user_id, 'assistant', reply)
    return reply
```
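To keep the prompt within a token budget, trim the history before sending it to the model. A rough sketch using character count as a crude stand-in for a real tokenizer (the helper name and budget are illustrative):

```python
def trim_history(messages, max_chars=4000, keep_system=True):
    """Trim a chat history to fit a rough character budget.

    Keeps the system messages (if any) plus the most recent messages
    whose combined content length fits `max_chars`, preserving order.
    """
    system = [m for m in messages if m["role"] == "system"] if keep_system else []
    rest = [m for m in messages if m["role"] != "system"]
    kept, total = [], 0
    # Walk from newest to oldest, stopping when the budget is exceeded
    for msg in reversed(rest):
        total += len(msg["content"])
        if total > max_chars:
            break
        kept.append(msg)
    return system + list(reversed(kept))
```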
### 7. Thai Language Support
```python
# Generate Thai response
def generate_thai_response(message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                # "You are a Thai-speaking assistant; answer users in Thai."
                "content": "คุณคือผู้ช่วยที่พูดภาษาไทย ช่วยตอบคำถามของผู้ใช้ด้วยภาษาไทย"
            },
            {"role": "user", "content": message}
        ],
        temperature=0.7,
        max_tokens=500
    )
    return response.choices[0].message.content

# Example
response = generate_thai_response("สวัสดีครับ")  # "Hello" (polite)
# Typical output: "สวัสดีครับ มีอะไรให้ช่วยไหมครับ?" ("Hello, how can I help?")
```
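If the bot serves both Thai and English users, you can route messages by script before choosing the system prompt. A heuristic sketch based on the Thai Unicode block (U+0E00–U+0E7F); the function name and threshold are illustrative:

```python
def is_thai(text, threshold=0.3):
    """Heuristically detect Thai text by Unicode block.

    Returns True when at least `threshold` of the non-space characters
    fall in the Thai block (U+0E00-U+0E7F); useful for deciding
    whether to use the Thai system prompt.
    """
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return False
    thai = sum(1 for c in chars if "\u0e00" <= c <= "\u0e7f")
    return thai / len(chars) >= threshold
```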
### 8. Analytics and Monitoring
```python
import logging
from datetime import datetime

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Track incoming message
def track_message(user_id: str, message_type: str, content: str):
    logger.info({
        'timestamp': datetime.now().isoformat(),
        'user_id': user_id,
        'type': message_type,
        'content': content
    })

# Track outgoing response and latency
def track_response(user_id: str, response: str, latency: float):
    logger.info({
        'timestamp': datetime.now().isoformat(),
        'user_id': user_id,
        'response': response,
        'latency': latency
    })
```
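Rather than timing every handler by hand, a decorator can capture latency automatically and feed it to the tracking helpers. A sketch (the stub handler is illustrative):

```python
import functools
import logging
import time

logger = logging.getLogger(__name__)

def timed(func):
    """Decorator that logs the wall-clock latency of each call."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            latency = time.perf_counter() - start
            logger.info({"function": func.__name__, "latency": latency})
    return wrapper

@timed
def generate_response_stub(message):
    # Placeholder for a real LLM call
    return f"echo: {message}"
```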
---
## Quick Start
### Minimal LINE Bot Setup
```python
from flask import Flask, request, abort
from linebot import LineBotApi, WebhookHandler
from linebot.exceptions import InvalidSignatureError
from linebot.models import MessageEvent, TextMessage, TextSendMessage
from openai import OpenAI

app = Flask(__name__)

# Initialize LINE API
line_bot_api = LineBotApi('YOUR_CHANNEL_ACCESS_TOKEN')
handler = WebhookHandler('YOUR_CHANNEL_SECRET')

# Initialize OpenAI
client = OpenAI(api_key='YOUR_OPENAI_API_KEY')

@app.route("/webhook", methods=['POST'])
def webhook():
    body = request.get_data(as_text=True)
    signature = request.headers['X-Line-Signature']
    try:
        handler.handle(body, signature)
    except InvalidSignatureError:
        abort(400)
    return 'OK'

@handler.add(MessageEvent, message=TextMessage)
def handle_message(event):
    # Generate response with OpenAI
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": event.message.text}
        ]
    )
    # Send reply
    line_bot_api.reply_message(
        event.reply_token,
        TextSendMessage(text=response.choices[0].message.content)
    )

if __name__ == "__main__":
    app.run(port=5000)
```
### Installation
```bash
pip install flask line-bot-sdk openai
export CHANNEL_ACCESS_TOKEN="your-channel-access-token"
export CHANNEL_SECRET="your-channel-secret"
export OPENAI_API_KEY="your-openai-api-key"
```
### Next Steps
1. Set up LINE Developer Console and webhook URL
2. Add conversation memory for multi-turn conversations
3. Implement rich menus and quick replies
4. Set up analytics and monitoring
---
## Production Checklist
- [ ] Error handling implemented with try-catch blocks for all operations
- [ ] Rate limiting configured (10-100 requests/minute)
- [ ] Token budget set with maximum token limits per conversation (4,000-8,000 tokens)
- [ ] Timeout configured (30-60 seconds for execution, 5-10 seconds for tool calls)
- [ ] Structured logging set up for all interactions
- [ ] Monitoring configured with metrics for success rate, latency, token usage
- [ ] Security with signature verification for webhooks
- [ ] Input validation and sanitization implemented
- [ ] Output filtering to remove sensitive data (PII, secrets)
- [ ] Cost tracking configured to monitor API costs per conversation
- [ ] Memory management with context window (10-20 messages)
- [ ] Fallback mechanisms for failures
- [ ] Retry logic with exponential backoff
- [ ] Observability with tracing and correlation IDs
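
The retry item above can be sketched as a small wrapper with exponential backoff and jitter (the function name and defaults are illustrative):

```python
import random
import time

def with_retry(func, max_attempts=3, base_delay=0.5):
    """Call `func`, retrying on exception with exponential backoff.

    Delay doubles each attempt plus random jitter; re-raises the last
    exception after `max_attempts` failures.
    """
    for attempt in range(max_attempts):
        try:
            return func()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```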
---
## Anti-patterns
1. **No Signature Verification**: Webhooks without signature verification allow unauthorized requests. Always verify X-Line-Signature header.
2. **No Error Handling**: Failing silently crashes the bot and frustrates users. Implement graceful error handling with user-friendly messages.
3. **No Rate Limiting**: Uncontrolled API usage leads to cost blowout. Implement per-user rate limits with Redis.
4. **Infinite Loops**: Chatbot enters endless loops without exit. Set max_iterations and timeout.
5. **Ignoring Tool Errors**: Tool failures crash the bot. Wrap tools in try-catch with fallback responses.
6. **Hardcoding API Keys**: Keys in code expose security vulnerabilities. Use environment variables or secret managers.
7. **No Observability**: Lack of logging makes debugging impossible. Add structured logging with correlation IDs.
8. **Skipping Validation**: Not validating inputs causes crashes and data corruption. Implement schema validation.
9. **Poor Prompt Design**: Vague prompts cause hallucinations. Use specific, testable prompts with examples.
10. **Single Point of Failure**: No redundancy causes service outages. Deploy multiple instances behind a load balancer.
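
Anti-pattern 3 can be addressed with a per-user sliding-window limiter. The sketch below is in-memory for clarity; a production bot would back it with Redis so limits survive restarts and apply across instances:

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Per-user sliding-window rate limiter (in-memory sketch)."""

    def __init__(self, max_requests=30, window_seconds=60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # user_id -> timestamps

    def allow(self, user_id, now=None):
        """Return True if the user may make a request right now."""
        now = time.monotonic() if now is None else now
        q = self.hits[user_id]
        # Drop timestamps that have left the window
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False
        q.append(now)
        return True
```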
---
## Integration Points
- **LLM Integration**: [`langchain-patterns`](../../06-ai-ml-production/langchain-patterns/SKILL.md) - Setting up LLM providers
- **Chatbot Integration**: [`chatbot-integration`](../chatbot-integration/SKILL.md) - Backend chatbot logic
- **Conversational UI**: [`conversational-ui`](../conversational-ui/SKILL.md) - UI patterns
- **Error Handling**: [`error-handling`](../../03-backend-api/error-handling/SKILL.md) - Production error patterns
- **Thai Language Support**: [`multi-language`](../../25-internationalization/multi-language/SKILL.md) - Localization
---
## Further Reading
- [LINE Messaging API Documentation](https://developers.line.biz/en/docs/messaging-api/) - Official LINE API docs
- [LINE Bot SDK Python](https://github.com/line/line-bot-sdk-python) - Python SDK documentation
- [LINE Bot SDK Node.js](https://github.com/line/line-bot-sdk-nodejs) - Node.js SDK documentation
- [OpenAI API Documentation](https://platform.openai.com/docs/) - OpenAI API reference
- [Thai NLP Resources](https://github.com/PyThaiNLP/pythainlp) - Thai NLP tools
## How It Works
The skill wires LINE webhooks to a lightweight server (Flask or Express) that receives events and verifies signatures. Incoming messages are optionally passed to an LLM (OpenAI) with conversation history stored in Redis for multi-turn context. Responses are sent back via the LINE Bot API as text, images, quick replies, Flex messages, or rich menus. Logging, monitoring, rate limits, and security checks close the loop for production use.
## FAQ
**How do I secure my webhook endpoint?**
Verify the `X-Line-Signature` header on every request, use HTTPS, and restrict incoming IPs where possible. Store channel secrets securely and rotate them periodically.
**How do I manage multi-turn conversations without exceeding token limits?**
Keep a sliding window of recent messages in Redis (10–20 messages), summarize older context when needed, and set model token caps with truncation or system-level summaries.