Messaging

The Messaging feature provides a real-time chat interface for agents to communicate with customers, supporting text messages, media attachments, voice messages, and quick replies.

User Story: As a support agent, I want to send and receive messages in real-time so that I can provide immediate assistance to customers.

Acceptance Criteria:

  • ✅ Display conversation messages in chronological order
  • ✅ Send text messages
  • ✅ Send media attachments (images, videos, files)
  • ✅ Send voice messages
  • ✅ Use canned responses for quick replies
  • ✅ Show typing indicators
  • ✅ Show message delivery status
  • ✅ Mark messages as read automatically
  • ✅ Support message editing and deletion
  • ✅ Display rich message content (links, markdown)

Inputs:

  • Conversation ID
  • Message content (text, media, attachments)
  • Message metadata (private notes, canned response ID)

Outputs:

  • Real-time message list
  • Message sent confirmation
  • Delivery and read receipts
  • Typing indicators

Message Loading Flow:

User Opens Chat Screen
  ↓
Extract conversationId from Route Params
  ↓
Dispatch: fetchConversation(conversationId)
  ↓
API Request: GET /conversations/{id}
  ↓
Transform Response (snake_case → camelCase)
  ↓
Store Messages in Redux
  ↓
Render Message List
  ↓
Mark Unread Messages as Read
  ↓
Scroll to Bottom (Latest Message)
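
A minimal sketch of the load step on screen mount. The slice import path and the markMessagesRead action are assumptions made here for illustration; only fetchConversation is documented in the Redux actions below.

import React, { useEffect } from 'react';
import { useAppDispatch } from '@/hooks';
import { conversationActions } from '@/store/conversation/conversationSlice';

export const ChatScreen = ({ route }: { route: { params: { conversationId: number } } }) => {
  const dispatch = useAppDispatch();
  const { conversationId } = route.params;

  useEffect(() => {
    // Fetch the conversation; the thunk stores camelCased messages in Redux
    dispatch(conversationActions.fetchConversation({ conversationId }));
    // Assumed action: clear the unread count once the screen is visible
    dispatch(conversationActions.markMessagesRead({ conversationId }));
  }, [conversationId, dispatch]);

  return null; // MessagesList and MessageInput (shown later) render here
};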

Message Sending Flow:

User Types Message
  ↓
User Presses Send Button
  ↓
Validate Message Content
  ↓
Generate Temporary Message ID
  ↓
Optimistically Add to Redux Store
  ↓
Display Message in List (Sending State)
  ↓
Dispatch: sendMessage({ conversationId, content })
  ↓
API Request: POST /messages
  ├─ Success
  │   ↓
  │   Update Message with Server ID
  │   ↓
  │   Update Status to 'sent'
  │   ↓
  │   Clear Input Field
  └─ Failure
      ↓
      Update Message Status to 'failed'
      ↓
      Show Retry Option
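
A sketch of the optimistic path as a fragment. The addPendingMessage reducer name and the 'sending' status field are assumptions; echoId and updateMessageStatus appear in the actions and error-handling code later in this document.

const sendTextMessage = async (conversationId: number, content: string) => {
  const echoId = `echo_${Date.now()}`; // temporary client-side id

  // Assumed reducer: inserts a message with status 'sending' keyed by echoId
  dispatch(addPendingMessage({ conversationId, echoId, content, status: 'sending' }));

  try {
    // On success the thunk replaces the pending entry with the server message
    await dispatch(sendMessageActions.sendMessage({ conversationId, content, echoId })).unwrap();
  } catch {
    // On failure the entry is kept and flagged so the UI can offer a retry
    dispatch(updateMessageStatus({ echoId, status: 'failed' }));
  }
};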

Attachment Flow:

User Taps Attachment Button
  ↓
Show Media Picker (Camera/Gallery/Files)
  ↓
User Selects Media
  ↓
Validate File Size and Type
  ↓
Generate Preview Thumbnail
  ↓
Show Upload Progress
  ↓
Upload to Server (Multipart Form)
  ↓
Receive Media URL
  ↓
Send Message with Attachment URL
  ↓
Display in Message List
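
A sketch of the validation step. The 40 MB limit and the allowed MIME prefixes are assumptions, not documented limits.

const MAX_FILE_SIZE_BYTES = 40 * 1024 * 1024; // assumed limit
const ALLOWED_TYPE_PREFIXES = ['image/', 'video/', 'audio/', 'application/'];

export const validateAttachment = (file: { fileSize?: number; type?: string }): string | null => {
  const { fileSize, type } = file;
  if (fileSize && fileSize > MAX_FILE_SIZE_BYTES) {
    return 'FILE_TOO_LARGE';
  }
  if (!type || !ALLOWED_TYPE_PREFIXES.some(prefix => type.startsWith(prefix))) {
    return 'UNSUPPORTED_FILE_TYPE';
  }
  return null; // valid
};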

Voice Message Flow:

User Taps Microphone Button
  ↓
Request Microphone Permission
  ↓
Start Audio Recording
  ↓
Show Recording UI with Timer
  ↓
User Stops Recording
  ↓
Generate Audio File
  ↓
Show Playback Preview
  ↓
User Confirms Send
  ↓
Upload Audio File
  ↓
Send Message with Audio Attachment
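
A sketch of the permission step, assuming react-native-permissions handles the microphone prompt; the recording library itself is not specified here and is left abstract.

import { Platform } from 'react-native';
import { request, PERMISSIONS, RESULTS } from 'react-native-permissions';

// Ask for microphone access before starting a recording; the recorder
// implementation (start/stop, file generation) is intentionally omitted.
export const ensureMicPermission = async (): Promise<boolean> => {
  const permission = Platform.OS === 'ios'
    ? PERMISSIONS.IOS.MICROPHONE
    : PERMISSIONS.ANDROID.RECORD_AUDIO;
  const result = await request(permission);
  return result === RESULTS.GRANTED;
};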

Incoming Message Flow (WebSocket):

WebSocket Event: message.created
  ↓
ActionCable Receives Event
  ↓
Extract Message Data
  ↓
Transform to camelCase
  ↓
Dispatch: addOrUpdateMessage(message)
  ↓
Reducer Adds to messages.byId
  ↓
Update conversationId → messageIds mapping
  ↓
Update Conversation Last Activity
  ↓
Re-render Message List
  ↓
If Chat Screen Active
  ↓
Mark as Read Immediately
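
A rough sketch of the ActionCable handler. The store import, slice path, and the camelcase-keys dependency are assumptions; the event name and the addOrUpdateMessage action are taken from the flow above.

import camelcaseKeys from 'camelcase-keys';
import { store } from '@/store';
import { addOrUpdateMessage } from '@/store/conversation/conversationSlice';

export const onWebSocketEvent = (event: { event: string; data: Record<string, unknown> }) => {
  if (event.event !== 'message.created') return;
  // Transform the snake_case payload to camelCase before it reaches the reducer
  const message = camelcaseKeys(event.data, { deep: true });
  store.dispatch(addOrUpdateMessage(message));
};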

Edge Cases:

  • Network loss during message send
  • Duplicate messages from retry logic (see the dedup sketch after this list)
  • Messages arriving out of order
  • Large media file uploads
  • Microphone permission denied
  • Empty message submission blocked
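
A minimal dedup sketch for the retry/echo case, assuming messages are stored by server id and pending sends carry an echoId; the exact state shape is an assumption.

interface StoredMessage { id: number; echoId?: string; [key: string]: unknown }
interface MessagesState { byId: Record<number, StoredMessage> }

// Upsert keyed by server id; if an optimistic copy with the same echoId is still
// in the store under a temporary id, drop it so the message appears only once.
export const upsertMessage = (state: MessagesState, incoming: StoredMessage) => {
  if (incoming.echoId) {
    const pending = Object.values(state.byId).find(
      m => m.echoId === incoming.echoId && m.id !== incoming.id
    );
    if (pending) {
      delete state.byId[pending.id];
    }
  }
  state.byId[incoming.id] = { ...state.byId[incoming.id], ...incoming };
};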

Request:

GET /api/v1/accounts/{accountId}/conversations/{conversationId}

Response:

{
  data: {
    meta: {
      sender: Contact;
      assignee: Agent | null;
      channel: string;
    };
    id: number;
    status: string;
    messages: Message[];
    unread_count: number;
  }
}

interface Message {
  id: number;
  content: string;
  message_type: 0 | 1 | 2; // 0=incoming, 1=outgoing, 2=activity
  created_at: number;
  private: boolean;
  attachments: Attachment[];
  sender: {
    id: number;
    name: string;
    thumbnail?: string;
    type: 'contact' | 'user';
  };
  conversation_id: number;
  content_attributes?: {
    email?: EmailMetadata;
    items?: unknown[];
  };
}

interface Attachment {
  id: number;
  file_type: 'image' | 'video' | 'audio' | 'file';
  data_url: string;
  thumb_url?: string;
}
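
Per the loading flow above, these snake_case payloads are converted to camelCase before they are stored. A sketch of the resulting client-side shapes; the Client* type names are assumptions made here for illustration.

// Client-side shape after the snake_case → camelCase transform
interface ClientMessage {
  id: number;
  content: string;
  messageType: 0 | 1 | 2;
  createdAt: number;
  private: boolean;
  attachments: ClientAttachment[];
  sender: { id: number; name: string; thumbnail?: string; type: 'contact' | 'user' };
  conversationId: number;
  contentAttributes?: { email?: EmailMetadata; items?: unknown[] };
}

interface ClientAttachment {
  id: number;
  fileType: 'image' | 'video' | 'audio' | 'file';
  dataUrl: string;
  thumbUrl?: string;
}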

Request:

POST /api/v1/accounts/{accountId}/conversations/{conversationId}/messages
Body:

{
  content: string;
  private?: boolean;
  message_type?: 'outgoing';
  attachments?: File[];
}

Response:

{
  id: number;
  content: string;
  message_type: number;
  created_at: number;
  conversation_id: number;
  sender: {
    id: number;
    name: string;
  };
  attachments: Attachment[];
}

Fetch Conversation:

conversationActions.fetchConversation({
  conversationId: number
})

Send Message:

sendMessageActions.sendMessage({
  conversationId: number,
  content: string,
  isPrivate?: boolean,
  echoId?: string // For optimistic updates
})

Send Attachment:

sendMessageActions.sendAttachment({
  conversationId: number,
  attachment: File,
  isPrivate?: boolean
})

Toggle Typing:

conversationActions.toggleTyping({
  conversationId: number,
  isTyping: boolean
})
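
A sketch of wiring toggleTyping to the text input, assuming a custom hook, the slice import path, and a 2-second idle timeout (all assumptions; only the action itself is documented above).

import { useRef } from 'react';
import { useAppDispatch } from '@/hooks';
import { conversationActions } from '@/store/conversation/conversationSlice';

export const useTypingIndicator = (conversationId: number) => {
  const dispatch = useAppDispatch();
  const timeout = useRef<ReturnType<typeof setTimeout>>();

  return (text: string) => {
    dispatch(conversationActions.toggleTyping({ conversationId, isTyping: text.length > 0 }));
    if (timeout.current) clearTimeout(timeout.current);
    // Turn the indicator off again after a short idle period
    timeout.current = setTimeout(() => {
      dispatch(conversationActions.toggleTyping({ conversationId, isTyping: false }));
    }, 2000);
  };
};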

Selectors:

// Get messages for conversation
const messages = useAppSelector(state =>
  selectMessagesByConversationId(state, conversationId)
);

// Get conversation
const conversation = useAppSelector(state =>
  selectConversationById(state, conversationId)
);

// Get typing users
const typingUsers = useAppSelector(state =>
  selectTypingUsers(state, conversationId)
);

// Get message send state
const sendState = useAppSelector(state => state.sendMessage);
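
For reference, a sketch of how selectMessagesByConversationId could be memoized with createSelector, assuming the messages.byId map and the conversationId → messageIds mapping described in the incoming-message flow; the actual selector implementation may differ.

import { createSelector } from '@reduxjs/toolkit';
import type { RootState } from '@/store';

// Assumed state shape: messages.byId plus a conversationId → messageIds map
// (the field name idsByConversation is an assumption).
export const selectMessagesByConversationId = createSelector(
  [
    (state: RootState) => state.messages.byId,
    (state: RootState, conversationId: number) =>
      state.messages.idsByConversation[conversationId] ?? [],
  ],
  (byId, messageIds) => messageIds.map(id => byId[id]).filter(Boolean)
);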

Failed Send Handling:

try {
  await dispatch(sendMessageActions.sendMessage({
    conversationId,
    content,
    echoId: tempMessageId,
  })).unwrap();
} catch (error) {
  // Update message status to failed
  dispatch(updateMessageStatus({
    echoId: tempMessageId,
    status: 'failed',
  }));

  // Show retry option
  showToast({
    message: I18n.t('ERRORS.MESSAGE_SEND_FAILED'),
    action: {
      label: I18n.t('RETRY'),
      onPress: () => retryMessage(tempMessageId),
    },
  });
}

Attachment Upload Retry:

const uploadAttachment = async (file: File) => {
  const maxRetries = 3;
  let attempt = 0;

  while (attempt < maxRetries) {
    try {
      return await sendMessageActions.sendAttachment({
        conversationId,
        attachment: file,
      });
    } catch (error) {
      attempt++;
      if (attempt === maxRetries) {
        throw error;
      }
      await delay(1000 * attempt); // Back off with a linearly increasing delay before retrying
    }
  }
};

Offline Message Queue:

import NetInfo from '@react-native-community/netinfo';

const queuedMessages: MessagePayload[] = [];

const sendMessage = async (message: MessagePayload) => {
  if (!isOnline) {
    queuedMessages.push(message);
    showToast({ message: I18n.t('MESSAGE_QUEUED') });
    return;
  }
  // Send immediately
  await dispatch(sendMessageActions.sendMessage(message));
};

// Flush queue when connectivity returns
NetInfo.addEventListener(state => {
  if (state.isConnected && queuedMessages.length > 0) {
    queuedMessages.forEach(message => sendMessage(message));
    queuedMessages.length = 0;
  }
});

Analytics Events:

// Message sent
AnalyticsHelper.track('message_sent', {
  conversation_id: conversationId,
  message_type: 'text',
  is_private: false,
  character_count: content.length,
});

// Attachment sent
AnalyticsHelper.track('attachment_sent', {
  conversation_id: conversationId,
  file_type: 'image',
  file_size: fileSize,
});

// Voice message sent
AnalyticsHelper.track('voice_message_sent', {
  conversation_id: conversationId,
  duration_seconds: duration,
});

// Canned response used
AnalyticsHelper.track('canned_response_used', {
  conversation_id: conversationId,
  canned_response_id: cannedResponseId,
});

Performance Metrics:

  • Message send latency (see the timing sketch after this list)
  • Message render time
  • Scroll performance (FPS)
  • Attachment upload speed
  • Voice recording quality
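
A sketch of capturing send latency for the first metric, reusing the analytics helper above; the message_send_latency event name is an assumption.

const sendWithTiming = async (payload: { conversationId: number; content: string }) => {
  const startedAt = Date.now();
  await dispatch(sendMessageActions.sendMessage(payload)).unwrap();
  // Report elapsed time alongside the events tracked above
  AnalyticsHelper.track('message_send_latency', {
    conversation_id: payload.conversationId,
    latency_ms: Date.now() - startedAt,
  });
};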

MessageInput Component:

import React, { useState } from 'react';
import { View, TextInput, TouchableOpacity, Text } from 'react-native';
import { useAppDispatch } from '@/hooks';
import { sendMessageActions } from '@/store/conversation/sendMessageSlice';

export const MessageInput = ({ conversationId }: { conversationId: number }) => {
  const dispatch = useAppDispatch();
  const [message, setMessage] = useState('');
  const [isSending, setIsSending] = useState(false);

  const handleSend = async () => {
    if (!message.trim() || isSending) return;

    setIsSending(true);
    try {
      await dispatch(sendMessageActions.sendMessage({
        conversationId,
        content: message,
      })).unwrap();
      setMessage('');
    } catch (error) {
      console.error('Failed to send message:', error);
    } finally {
      setIsSending(false);
    }
  };

  return (
    <View style={{ flexDirection: 'row', padding: 10 }}>
      <TextInput
        value={message}
        onChangeText={setMessage}
        placeholder="Type a message..."
        multiline
        style={{ flex: 1, marginRight: 10 }}
      />
      <TouchableOpacity
        onPress={handleSend}
        disabled={!message.trim() || isSending}>
        <Text>{isSending ? 'Sending...' : 'Send'}</Text>
      </TouchableOpacity>
    </View>
  );
};

MessagesList Component:

import React from 'react';
import { FlatList } from 'react-native';
import { useAppSelector } from '@/hooks';
// Selector import path assumed; only the selector names are documented above
import {
  selectMessagesByConversationId,
  selectTypingUsers,
} from '@/store/conversation/selectors';
import { MessageItem } from './MessageItem';
import { TypingIndicator } from './TypingIndicator';

export const MessagesList = ({ conversationId }: { conversationId: number }) => {
  const messages = useAppSelector(state =>
    selectMessagesByConversationId(state, conversationId)
  );
  const typingUsers = useAppSelector(state =>
    selectTypingUsers(state, conversationId)
  );

  return (
    <FlatList
      data={messages}
      renderItem={({ item }) => <MessageItem message={item} />}
      keyExtractor={item => item.id.toString()}
      ListFooterComponent={
        typingUsers.length > 0 ? (
          <TypingIndicator users={typingUsers} />
        ) : null
      }
      inverted // Newest messages at bottom
    />
  );
};

AttachmentButton Component:

import React from 'react';
import { TouchableOpacity } from 'react-native';
import { launchImageLibrary } from 'react-native-image-picker';
import { useAppDispatch } from '@/hooks';
import { sendMessageActions } from '@/store/conversation/sendMessageSlice';
import { AttachmentIcon } from './AttachmentIcon'; // icon import path assumed

export const AttachmentButton = ({ conversationId }: { conversationId: number }) => {
  const dispatch = useAppDispatch();

  const handleAttachment = async () => {
    const result = await launchImageLibrary({
      mediaType: 'mixed',
      quality: 0.8,
    });

    if (result.assets && result.assets[0]) {
      const file = result.assets[0];
      await dispatch(sendMessageActions.sendAttachment({
        conversationId,
        attachment: {
          uri: file.uri,
          name: file.fileName,
          type: file.type,
        },
      }));
    }
  };

  return (
    <TouchableOpacity onPress={handleAttachment}>
      <AttachmentIcon />
    </TouchableOpacity>
  );
};

Feature Flags:

// Voice messages (beta feature)
if (featureFlags.voiceMessages) {
  return <VoiceRecorderButton />;
}

// Rich text editor (experimental)
if (featureFlags.richTextEditor) {
  return <RichTextInput />;
}

Rollout Phases:

  1. Phase 1: Basic text messaging
  2. Phase 2: Image attachments
  3. Phase 3: Voice messages
  4. Phase 4: File attachments
  5. Phase 5: Rich text editing