Backend Engineering / Data Systems / Automation
Akilan Manikandan
Software engineer with internship experience across backend infrastructure, workflow automation, and data-intensive applications. I care about reliable systems, strong engineering fundamentals, and building software that is simple to operate at scale.
Current focus
- Backend services and platform-oriented APIs
- Scalable automation and orchestration workflows
- Data pipelines with strong reliability and observability
About
Professional Summary
Computer Science undergraduate with hands-on internship experience building backend services, automation platforms, and production-facing data workflows. My interest is in well-engineered systems: clear APIs, dependable execution paths, maintainable abstractions, and software that can scale without losing operational clarity.
SRM Institute of Science and Technology
B.Tech in Computer Science and Engineering, 2022-2026
CGPA: 8.35 / 10.0
Primary stack: Python, Java, FastAPI, Django, PostgreSQL
Additional exposure across Spring Boot, Node.js, MongoDB, AWS, Docker, and ETL systems.
Chennai, India
Seeking software engineering roles in backend, data platforms, automation, and infrastructure-oriented product teams.
Experience
Internships
Workflow Automation & Data Platforms
Contributed to production-facing automation and data platform workflows, with responsibility spanning orchestration, ETL design, execution reliability, and reporting-system scalability.
- Engineered a multi-tenant, configuration-driven automation platform that processed 50+ daily reports across 20+ branches, reducing manual reporting effort by 80%.
- Designed metadata-driven orchestration for 100+ pipelines with retry handling, dependency sequencing, and validation checkpoints, improving failure recovery time by 50% and maintaining a 95%+ success rate.
- Implemented end-to-end ETL pipelines across Python, PostgreSQL, Amazon S3, and Power BI, enabling near real-time reporting and improving data availability turnaround by 70%.
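The metadata-driven orchestration described above, with dependency sequencing and retry handling, can be sketched roughly as follows. The pipeline names, dependency graph, and retry budgets here are illustrative stand-ins, not the production configuration.

```python
from collections import deque

# Illustrative pipeline metadata: each entry declares its upstream
# dependencies and how many retries it is allowed.
PIPELINES = {
    "extract_orders":   {"deps": [], "retries": 2},
    "extract_branches": {"deps": [], "retries": 2},
    "transform_sales":  {"deps": ["extract_orders", "extract_branches"], "retries": 1},
    "load_reporting":   {"deps": ["transform_sales"], "retries": 1},
}

def topological_order(pipelines):
    """Sequence pipelines so every dependency runs before its dependents."""
    indegree = {name: len(meta["deps"]) for name, meta in pipelines.items()}
    dependents = {name: [] for name in pipelines}
    for name, meta in pipelines.items():
        for dep in meta["deps"]:
            dependents[dep].append(name)
    queue = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for child in dependents[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    if len(order) != len(pipelines):
        raise ValueError("dependency cycle detected")
    return order

def run_with_retries(name, task, retries):
    """Run one pipeline step, retrying on failure up to its retry budget."""
    for attempt in range(retries + 1):
        try:
            return task(name)
        except Exception:
            if attempt == retries:
                raise

def run_all(pipelines, task):
    """Execute all pipelines in dependency order with per-step retries."""
    results = {}
    for name in topological_order(pipelines):
        results[name] = run_with_retries(name, task, pipelines[name]["retries"])
    return results
```

Validation checkpoints would slot in naturally as extra callables between steps; the key design choice is that pipelines are data, so adding a pipeline means adding metadata rather than code.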
Backend Infrastructure
Worked on backend platform development for enterprise workflow systems, with a focus on API design, database efficiency, and application reliability.
- Built and extended RESTful backend services using Django REST Framework and PostgreSQL to support scalable workflow operations and business-critical product features.
- Improved responsiveness and system stability by debugging backend and database bottlenecks, strengthening data consistency and reducing production defects.
- Accelerated feature delivery by 60% through structured AI-assisted development workflows, shortening implementation and debugging cycles across backend modules.
Machine Learning Systems
Contributed to applied machine learning workflows for speech-based emotion analysis, with emphasis on feature engineering, model quality, and rapid experimentation.
- Developed components for an AI-driven speech emotion recognition pipeline using LSTM models and MFCC-based feature extraction for multi-class audio classification.
- Improved model accuracy to 97% through iterative preprocessing, feature engineering, and model tuning across the training workflow.
- Increased development velocity by 70% through rapid prototyping and AI-assisted experimentation within a 6-member engineering team.
Enterprise Backend Applications
Supported backend feature development for an enterprise HRMS application, working on role-based workflows, team delivery, and release quality in an Agile environment.
- Built backend functionality for a multi-role HRMS platform serving Admin, HR, Employee, and Client workflows, contributing to an estimated 30% improvement in operational efficiency.
- Collaborated in a 4-member Agile team to deliver sprint-based features and maintain stable integration across shared application modules.
- Improved deployment efficiency by 20% through stronger version control practices and reduced merge conflicts during release cycles.
Projects
Selected projects in backend, automation, and applied systems engineering
How It Works
Files move through a controlled pipeline covering upload, encryption, key exchange, policy validation, access control, audit logging, and monitored retrieval. Separate services manage storage security, sharing workflows, and anomaly detection.
What It Solves
It addresses insecure document sharing in distributed environments by enforcing strong access boundaries, protecting sensitive data, and improving traceability for every file-level operation.
Why It Scales
The architecture separates security, monitoring, and file operations into service layers, making it easier to support more users, more policies, and higher traffic without reworking the core authorization model.
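The layered separation of authorization from file operations can be illustrated with a minimal policy-check chain. The request fields, policy rules, and audit structure below are hypothetical, chosen only to show how policies compose without touching the core authorization flow.

```python
from dataclasses import dataclass, field

# Hypothetical request context and audit model, for illustration only.
@dataclass
class FileRequest:
    user: str
    role: str
    action: str       # e.g. "read", "write", "share"
    file_owner: str

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, request, allowed):
        """Every file-level decision is logged for traceability."""
        self.entries.append((request.user, request.action, allowed))

def role_policy(request):
    """Example rule: only admins may share; any role may read."""
    if request.action == "share":
        return request.role == "admin"
    return True

def ownership_policy(request):
    """Example rule: writes are restricted to the file's owner."""
    if request.action == "write":
        return request.user == request.file_owner
    return True

def authorize(request, policies, audit):
    """All policy layers must approve; the decision is always audited."""
    allowed = all(policy(request) for policy in policies)
    audit.record(request, allowed)
    return allowed
```

Because policies are plain callables, adding a new rule (say, a sensitivity label check) extends the list without modifying `authorize` itself.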
How It Works
Event data is collected through ingestion pipelines, normalized, stored in MongoDB, and exposed through FastAPI endpoints. A recommendation layer then maps user intent to relevant events using contextual query handling.
What It Solves
It reduces the friction of event discovery by replacing fragmented listings with a single recommendation flow that can answer user queries and return more relevant options quickly.
Why It Scales
Ingestion, storage, API delivery, and recommendation logic are separated into distinct layers, allowing the system to support larger event catalogs, more users, and richer recommendation rules without tightly coupling the stack.
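The recommendation layer's query-to-event mapping could look roughly like the sketch below. The event schema and tag-overlap scoring rule are assumptions for illustration; in the system described, normalized events live in MongoDB and are served through FastAPI endpoints rather than from an in-memory list.

```python
# Illustrative in-memory stand-in for the event store.
EVENTS = [
    {"name": "PyCon Chennai",   "tags": {"python", "conference"}, "city": "chennai"},
    {"name": "ML Workshop",     "tags": {"ml", "workshop"},       "city": "bangalore"},
    {"name": "Data Eng Meetup", "tags": {"data", "meetup"},       "city": "chennai"},
]

def recommend(query, city=None, events=EVENTS, limit=5):
    """Score events by tag overlap with the query terms, optionally filtered by city."""
    terms = set(query.lower().split())
    scored = []
    for event in events:
        if city and event["city"] != city:
            continue
        score = len(terms & event["tags"])
        if score > 0:
            scored.append((score, event["name"]))
    # Highest overlap first; break ties alphabetically for stable output.
    scored.sort(key=lambda pair: (-pair[0], pair[1]))
    return [name for _, name in scored[:limit]]
```

Keeping the scoring function pure makes it easy to swap in richer recommendation rules later without changing the ingestion or API layers.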
How It Works
The pipeline preprocesses audio, extracts MFCC features, and feeds the sequential representation into an LSTM model trained to classify emotional states from speech samples.
What It Solves
It enables systems to interpret emotional cues in speech, which is useful for voice interfaces, assistive technologies, and conversational systems that require more context than transcription alone.
Why It Scales
The workflow is modular: feature extraction, model training, and inference can be improved independently, making it easier to expand datasets, add emotion classes, or package the model as a lightweight inference service.
How It Works
The workflows ingest source data, split processing by dealership branch, transform records into standardized outputs, and generate companion "last updated" snapshots so each client can track data freshness alongside branch-level reporting outputs.
What It Solves
They remove the manual overhead of preparing branch-level reporting files and verifying which datasets are current, giving operations teams a repeatable way to monitor branch performance and data freshness.
Why It Scales
Because the workflows are configuration-driven and reusable per client, the same pattern can be extended across more dealerships, brands, and branches with minimal change beyond source mapping and output configuration.
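A reduced sketch of the configuration-driven, branch-split pattern is below. The config keys, field names, and branch values are illustrative; the real workflows map many more source fields and write to actual reporting targets.

```python
from datetime import datetime, timezone

# Illustrative per-client configuration: which field identifies the
# branch, and which fields appear in the standardized output.
CONFIG = {
    "client": "demo-dealer-group",
    "branch_field": "branch",
    "output_fields": ["branch", "units_sold", "revenue"],
}

def split_by_branch(rows, config):
    """Group source records by branch, keeping only configured output fields."""
    branches = {}
    for row in rows:
        branch = row[config["branch_field"]]
        trimmed = {f: row[f] for f in config["output_fields"]}
        branches.setdefault(branch, []).append(trimmed)
    return branches

def freshness_snapshot(branches, config, now=None):
    """Companion 'last updated' snapshot so clients can verify data freshness."""
    now = now or datetime.now(timezone.utc)
    return {
        "client": config["client"],
        "last_updated": now.isoformat(),
        "branches": {b: len(rows) for b, rows in branches.items()},
    }
```

Extending the pattern to a new client is then a matter of supplying a new `CONFIG` with its own source mapping, which mirrors the "minimal change beyond configuration" claim above.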
Certifications
Courses and certifications that shaped my backend, data, and software engineering foundation
Claude Code in Action
Anthropic
AI Fluency for Students
Anthropic
NLP for ML with Python: Advanced NLP Using spaCy & Scikit-learn
Skillsoft
Python for Data Science
NPTEL
Java Full Stack
Simbiotik Technologies
Software Development
Prodigy InfoTech
DBMS Completion
Udemy
Digital Electronics
Infosys Springboard
Python
Infosys Springboard
Java Basics
HackerRank
Artificial Intelligence
Infosys Springboard
Machine Learning
Infosys Springboard
Capabilities
Technical areas I work in most
Backend Engineering
FastAPI, Django REST Framework, Spring Boot, Node.js, NestJS, REST API design, service-oriented architecture
Data Platforms & Cloud
PostgreSQL, MongoDB, SQL, ETL/ELT workflows, AWS EC2/S3/Lambda/SQS, Docker, Linux
Automation & Applied ML
n8n, Playwright, TagUI, Scikit-learn, TensorFlow, PyTorch, model experimentation and workflow automation
Contact
Interested in backend, platform, and automation-focused engineering roles.
I am looking for opportunities where I can contribute to well-engineered software, learn from strong teams, and grow as a backend and systems-focused engineer.