Role & responsibilities
Dear Aspirants,
Greetings from Adelik Business Solutions.
We are hiring a Senior Python Engineer for one of our esteemed service-based clients.
Location: Chennai
Work mode: In office
Years of experience: 10-14 years
Notice period: Immediate to 45 days
Salary: Best in the industry
Job Title: SENIOR PYTHON ENGINEER
Required Skills and Qualifications
- 10+ years of professional software development experience, with a strong emphasis on Python for building enterprise-scale solutions.
- Expert-level Python with FastAPI/Flask; ability to design, build, and operate distributed, event-driven microservices using Kafka, RabbitMQ, or Azure Service Bus.
- Deep understanding of asynchronous programming (async/await), concurrency patterns, and distributed systems concepts (eventual consistency, idempotency, state management); an async fan-out sketch follows this list.
- Proven experience in reverse engineering legacy codebases with minimal documentation, including analyzing compiled code, database schemas, stored procedures, and identifying undocumented business rules through code analysis and data pattern investigation.
- Ability to work with decompilers, debuggers, and profiling tools to understand legacy system behavior when source code or documentation is unavailable.
- Strong analytical skills and experience with algorithms, data structures, and solving complex logical problems.
- Advanced proficiency in analytics and forecasting using Python, including time-series forecasting (ARIMA, Prophet, LSTM), statistical modelling and hypothesis testing, and experience with scikit-learn, statsmodels, or similar machine learning frameworks.
- Data analysis proficiency with Python (Pandas/NumPy), time-series analysis, ETL pipeline design for analytical workloads, and ability to design experiments and interpret results.
- Extensive experience processing large-scale datasets (multi-GB to TB) using distributed processing frameworks (Dask, Ray, PySpark) and streaming data pipelines handling 10K+ events/second.
- Proficiency in data partitioning, sharding strategies, pagination for massive datasets, and knowledge of columnar storage formats (Parquet, Arrow) and data lake architectures.
- Experience in performance profiling and optimization in Python (cProfile, Pyflame, py-spy) across microservices and event pipelines, including async I/O analysis, memory/CPU hotspot detection, SQL/query tuning, and data-intensive scenarios.
- Proven experience building resilient backend jobs and schedulers (cron, Celery/Celery Beat, Airflow), including idempotent batch processing, retries with exponential backoff, circuit breaker patterns, and operational monitoring; a retry/backoff sketch follows this list.
- Strong proficiency with relational databases (e.g., PostgreSQL, SQL Server), including complex queries, transaction management, working with read replicas, and database optimization for analytical queries (indexing strategies, materialized views, query optimization).
- Demonstrated ability to understand and translate complex business requirements from technical documentation or subject matter experts into functional code.
- Knowledge of containerization (Docker) and container orchestration with Kubernetes, including CI/CD pipelines and deployment automation.
- Excellent collaboration and communication skills, with the ability to work effectively with both technical and non-technical stakeholders and mentor junior developers.
- Experience documenting reverse-engineered logic and creating architectural diagrams from existing systems.
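
By way of illustration, here is a minimal sketch of the bounded async/await fan-out pattern referenced in the list above. The fetch function and its semaphore limit are hypothetical names, and the network call is stubbed with asyncio.sleep so the snippet runs standalone.

```python
# Minimal sketch: bounded async fan-out with a semaphore.
import asyncio

async def fetch(item: int, limit: asyncio.Semaphore) -> int:
    # The semaphore caps in-flight work so downstream services are not flooded.
    async with limit:
        await asyncio.sleep(0.1)  # stand-in for a real network call
        return item * 2

async def main() -> None:
    limit = asyncio.Semaphore(10)           # at most 10 concurrent calls
    tasks = [fetch(i, limit) for i in range(100)]
    results = await asyncio.gather(*tasks)  # fan out, collect in order
    print(sum(results))

if __name__ == "__main__":
    asyncio.run(main())
```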
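Likewise, a minimal sketch of idempotent batch processing with retries and exponential backoff, using only the standard library. The with_backoff helper, handle, and the processed set are hypothetical stand-ins for a real job and its durable idempotency store (e.g., a database table keyed by record ID).

```python
# Minimal sketch: idempotent processing with exponential backoff and jitter.
import random
import time

def with_backoff(fn, attempts: int = 5, base: float = 0.5):
    # Retry fn with exponential backoff plus jitter; re-raise after the
    # final attempt so a scheduler can mark the job failed.
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base * 2 ** attempt + random.uniform(0, 0.1))

processed: set[str] = set()  # stand-in for a durable idempotency store

def handle(record_id: str) -> None:
    if record_id in processed:      # idempotency: skip already-done work
        return
    with_backoff(lambda: print(f"processing {record_id}"))
    processed.add(record_id)        # mark done only after success

for rid in ["a1", "a2", "a1"]:      # duplicate delivery is handled safely
    handle(rid)
```

The jitter on each backoff interval helps avoid synchronized retry storms when many workers fail at the same moment.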
Preferred/Recommended Skills
- Experience in supply chain, logistics, manufacturing, warehousing, inventory management, or e-commerce domains, especially with complex resource management systems.
- Direct experience migrating legacy applications (especially from platforms like IBM i/AS400, mainframes, or languages like RPG/COBOL) to modern technology stacks using incremental modernization approaches.
- Experience building and maintaining applications on cloud platforms (AWS, Azure, GCP), including infrastructure-as-code tools and service mesh technologies (Istio, Linkerd).
- Experience with data warehousing concepts and tools (Snowflake, Redshift, BigQuery).
- Familiarity with workflow orchestration tools beyond Airflow (Prefect, Dagster).
- Knowledge of caching strategies (Redis, Memcached) for high-volume read scenarios; a cache-aside sketch follows this list.
- Experience with observability and monitoring tools (Prometheus, Grafana, ELK stack) for monitoring data pipelines and microservices at scale.
- Familiarity with message queues, publish-subscribe patterns, event sourcing, and API design best practices (REST/gRPC).
- Track record of technical mentorship or leading small engineering teams.
- Contributions to open-source projects or active participation in technical communities.
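
As a concrete illustration of the caching bullet above, here is a minimal cache-aside sketch. A dict with expiry timestamps stands in for Redis/Memcached so the snippet runs standalone, and load_from_db is a hypothetical slow source of truth.

```python
# Minimal sketch: cache-aside reads with a TTL, in-memory stand-in for Redis.
import time

CACHE: dict[str, tuple[float, str]] = {}  # key -> (expires_at, value)
TTL_SECONDS = 60.0

def load_from_db(key: str) -> str:
    time.sleep(0.05)                      # stand-in for a real query
    return f"value-for-{key}"

def get(key: str) -> str:
    entry = CACHE.get(key)
    if entry and entry[0] > time.monotonic():
        return entry[1]                   # cache hit
    value = load_from_db(key)             # cache miss: read through
    CACHE[key] = (time.monotonic() + TTL_SECONDS, value)
    return value

print(get("sku:42"))  # miss, populates the cache
print(get("sku:42"))  # hit, served from the cache
```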
Interested candidates can apply.