6.0 - 11.0 years
18 - 30 Lacs
hyderabad
Hybrid
Position: Senior Python Developer with MLOps platform
Location: Hyderabad, India (work from the Experian Hyderabad office at least 2 to 3 days a week from day one)
No. of Open Positions: 1
Experience: 6+ Years

Summary
Design and build backend components of an MLOps platform on AWS. Collaborate with geographically distributed cross-functional teams. Participate in the on-call rotation with the rest of the team to handle production incidents.

Knowledge, Skills and Experience
Python Programming, AWS Services, Flask, Spark programming (PySpark/Scala), Async Programming, boto3

Must-Have Skills:
- 5 to 9 years of working experience in Python programming with Flask and FastAPI
- Experience working with WSGI and ASGI web servers such as Gunicorn and Uvicorn
- Experience with concurrent programming designs such as AsyncIO
- Experience with unit and functional testing frameworks
- Experience with any of the public cloud platforms (AWS, Azure, GCP), preferably AWS
- Experience with CI/CD practices, tools, and frameworks
- Team player comfortable working with cross-geography teams

Nice-to-Have Skills:
- Experience with Apache Kafka and developing Kafka client applications in Python
- Experience with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow
- Experience with big data processing frameworks, preferably Apache Spark
- Experience with container platforms such as AWS ECS or AWS EKS
- Experience with DevOps and IaC tools such as Terraform and Jenkins
- Experience with various Python packaging options such as Wheel, PEX, or Conda
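The posting above pairs AsyncIO with boto3, which is a common friction point: boto3 clients are blocking, so async services must off-load those calls. A minimal sketch of that pattern (the function body is a stand-in; real AWS calls would need boto3 or aioboto3):

```python
import asyncio
import time

def fetch_model_metadata(model_name: str) -> dict:
    # Stand-in for a blocking boto3 call, e.g. sagemaker.describe_model(...)
    time.sleep(0.1)  # simulate network latency
    return {"name": model_name, "status": "InService"}

async def fetch_all(model_names: list[str]) -> list[dict]:
    loop = asyncio.get_running_loop()
    # boto3 clients are synchronous; off-load each call to a thread so the
    # event loop (e.g. a FastAPI/ASGI worker) stays responsive.
    tasks = [loop.run_in_executor(None, fetch_model_metadata, n) for n in model_names]
    return await asyncio.gather(*tasks)

results = asyncio.run(fetch_all(["churn-v1", "fraud-v2"]))
print([r["name"] for r in results])
```

`asyncio.gather` preserves input order, so results line up with the requested model names even though the calls overlap in time.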
Posted 6 days ago
3.0 - 5.0 years
5 - 9 Lacs
bengaluru
Work from Office
About The Role
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Data Engineering, Python (Programming Language), Amazon Web Services (AWS), PySpark
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education

Data Engineer with a minimum of 3+ years of experience. The role requires deep knowledge of data engineering techniques to create data pipelines and build data assets that meet reporting requirements and deliver use cases for the Castrol business.
- At least 3+ years of strong hands-on programming experience with PySpark/Python/Boto3, including Python frameworks and libraries, following Python best practices
- Development experience in AWS services, mainly Lambda, Step Functions, Redshift, and Glue
- Strong experience in code optimisation using Spark SQL and PySpark
- Understanding of code versioning, Git repositories, and JFrog Artifactory
- AWS architecture knowledge, especially S3, EC2, Lambda, Redshift, CloudFormation, etc.
- Good knowledge of building and deploying Python applications
- Experience writing complex pipelines with huge volumes of data
- Ability to debug PySpark code
- Communicate with the team to coordinate and document application development and testing

Qualification: 15 years full-time education
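Several postings here revolve around Lambda-based ingestion. A minimal, locally runnable sketch of the handler pattern (the event shape and key are fabricated for illustration; a real function would use boto3 to read the object from S3):

```python
import json

def lambda_handler(event, context):
    # Hypothetical S3 put-event shape carrying the object keys to process.
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records]
    # Real work (Glue trigger, transformation, etc.) would happen here.
    return {"statusCode": 200, "body": json.dumps({"processed": keys})}

# Local smoke test with a fabricated event
event = {"Records": [{"s3": {"object": {"key": "raw/sales.csv"}}}]}
print(lambda_handler(event, None))
```

Testing handlers locally against fabricated events like this, before deploying, is a common practice because Lambda itself is just calling a plain Python function.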
Posted 1 week ago
6.0 - 11.0 years
12 - 20 Lacs
chennai
Hybrid
Position Overview
We are seeking a highly skilled Senior Data Engineer to join our data platform team, responsible for designing and implementing scalable, cloud-native data solutions. This role focuses on building modern data infrastructure using AWS and Azure services to support our growing analytics and machine learning initiatives.

Key Responsibilities
- Business Translation: Collaborate with business stakeholders to understand requirements and translate them into scalable data models, architectures, and robust end-to-end data pipelines
- ETL/ELT Implementation: Design, develop, and maintain high-performance batch and real-time data pipelines using modern frameworks like Apache Spark, Delta Lake, and streaming technologies
- Data Platform Management: Architect and implement data lakes, data warehouses, and lakehouse architectures following medallion architecture principles (Bronze, Silver, Gold layers)

Data Operations & Quality
- Pipeline Orchestration: Implement complex workflow orchestration using tools like AWS Step Functions or Azure Data Factory
- Data Quality Assurance: Establish comprehensive data validation, monitoring, and quality frameworks using tools like Great Expectations or custom Python solutions
- Performance Optimization: Monitor, troubleshoot, and optimize data pipeline performance, implementing partitioning strategies and query optimization techniques
- Data Governance: Implement data lineage tracking and metadata management, and ensure compliance with data privacy regulations (GDPR)

Experience & Background
- 6+ years of hands-on experience in Data Engineering roles with demonstrated expertise in cloud-native data solutions
- Cloud Expertise: Strong preference for AWS/Azure data services
- Production Experience: Proven track record of building and maintaining production-grade data pipelines processing terabytes of data

Technical Skills
Programming & Development
- Python: Advanced proficiency in Python for data engineering (pandas, numpy, boto3, azure-sdk)
- SQL: Expert-level SQL skills including complex queries, window functions, CTEs, and performance tuning
- PySpark: Hands-on experience with PySpark for large-scale data processing and optimization
- Version Control: Proficient with Git workflows, branching strategies, and collaborative development

Cloud Data Services (Primary Focus: AWS)
- Compute: AWS Glue and Lambda; on Azure, Data Factory, Synapse Analytics, and Data Lake Storage Gen2
- Storage & Databases: S3, Redshift, RDS, DynamoDB
- Analytics: Power BI, Athena, Kinesis (Data Streams, Data Firehose, Analytics)
- Security: IAM, KMS, VPC, Security Groups for secure data access patterns

Soft Skills & Collaboration
- Problem-Solving: Strong analytical and troubleshooting skills with attention to detail
- Communication: Excellent verbal and written communication skills for cross-functional collaboration
- Project Management: Ability to manage multiple priorities and deliver projects on time
- Mentorship: Experience mentoring junior engineers and promoting best practices

Preferred Qualifications (Nice to Have)
- CI/CD Pipelines: Experience with GitHub Actions, Jenkins, or Azure DevOps for automated testing and deployment
- Infrastructure as Code: Terraform or AWS CDK for infrastructure automation
- Containerization: Docker and Kubernetes for containerized data applications
- Testing Frameworks: Unit testing, integration testing, and data quality testing practices
- Cloud Certifications: AWS Certified Data Analytics, Azure Data Engineer Associate, or similar certifications
- Agile Methodologies: Experience working in Scrum/Kanban environments

What We Offer
- Opportunity to work with cutting-edge data technologies and modern cloud platforms
- Collaborative environment with opportunities for professional growth and learning
- Flexible work arrangements and comprehensive benefits package
- Conference attendance and certification reimbursement programs

This role requires the ability to work independently while collaborating effectively with cross-functional teams including Data Scientists, Analytics Engineers, Software Engineers, and Business Stakeholders.
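The SQL requirements above call out window functions and CTEs together. A small self-contained illustration (table and column names are invented for the example) using SQLite, which supports both in versions 3.25 and later:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount INTEGER);
    INSERT INTO sales VALUES
        ('north', 100), ('north', 300), ('south', 200), ('south', 50);
""")

# CTE + window function: rank each sale within its region, then keep the top one.
rows = conn.execute("""
    WITH ranked AS (
        SELECT region, amount,
               RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
        FROM sales
    )
    SELECT region, amount FROM ranked WHERE rnk = 1 ORDER BY region
""").fetchall()
print(rows)  # [('north', 300), ('south', 200)]
```

The same PARTITION BY / RANK pattern carries over directly to Athena, Redshift, and Spark SQL, which is why it shows up in interviews for roles like this one.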
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Software Development Engineer 1 focused on Cloud Automation at Astuto, you will play a crucial role in developing advanced workflows and resource schedulers to enhance cloud efficiency and drive cost savings. Your expertise in Infrastructure as Code, custom automation and runbooks, and AWS API and SDK integration will directly contribute to achieving operational excellence across AWS estates.

In this role, you will design, author, and maintain Terraform modules and CloudFormation templates for change-management tasks, ensuring idempotent changes that are safely applied. Additionally, you will develop Python scripts and AWS Lambda functions to orchestrate resource queries, scheduling, remediation, and cleanup tasks, packaging them as serverless workflows for non-standard use cases. Your responsibilities will also include leveraging AWS SDKs to build bespoke tooling, implementing error handling, retry logic, logging, and notifications for all automated operations. With a strong background in AWS services like EC2, S3, RDS, IAM, Lambda, Step Functions, CloudWatch, and CloudTrail, you will bring hands-on expertise to the team.

The ideal candidate will have at least 3 years of AWS experience, proficiency in writing Terraform modules and CloudFormation stacks for production workloads, and strong Python scripting skills using boto3/the AWS CLI for automating AWS API calls. Experience designing and operating AWS Lambda functions and Step Functions workflows, along with a deep understanding of AWS resource lifecycles, tagging strategies, and cost drivers, is essential. Solid software engineering fundamentals, excellent communication skills, and a penchant for documenting changes will be key attributes for success in this role.

Join Astuto, a SOC 2 Type II compliant and ISO 27001 certified company, to contribute to cloud cost management and optimization while leveraging AI-driven actionable insights through the flagship product OneLens.
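The role above asks for error handling, retry logic, and logging around automated operations. One common shape for that is a backoff decorator; a minimal sketch (the decorated function is a stand-in for a real boto3 call such as `ec2.create_tags`):

```python
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("automation")

def with_retries(max_attempts=3, base_delay=0.01):
    """Retry a flaky operation with exponential backoff, logging each failure."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:
                    log.warning("attempt %d/%d failed: %s", attempt, max_attempts, exc)
                    if attempt == max_attempts:
                        raise
                    time.sleep(base_delay * 2 ** (attempt - 1))
        return wrapper
    return decorator

calls = {"n": 0}

@with_retries(max_attempts=3)
def tag_instance():
    # Stand-in for an AWS API call; fails twice (e.g. throttling), then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("throttled")
    return "ok"

print(tag_instance())  # "ok" after two logged, retried failures
```

In production code against AWS specifically, boto3's built-in retry configuration covers much of this, but a decorator like the above is still useful around multi-call workflows.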
Posted 2 weeks ago
8.0 - 10.0 years
0 - 1 Lacs
kochi, chennai, bengaluru
Hybrid
Design & implement GenAI summarization workflows, deploy LLMs on AWS, build embedding pipelines, integrate vector DBs, develop APIs, ensure scalability/security, collaborate with stakeholders.
Posted 3 weeks ago
8.0 - 12.0 years
7 - 11 Lacs
chennai
Work from Office
Position: Senior SecOps Engineer

Candidate Skills (Technical): AWS, IAM, S3, AWS Security Groups, NACL, IGW, VPC, VPC Network Firewall, Endpoints, JSON IAM Policies, Scripting (Bash, Python), AWS API (boto3, AWS CLI), TCP/IP Networking, Cloud Infrastructure Security, OS Patch Management, Backup, Secure Logging, User Account Creation, CI/CD Pipeline, Cloud Security Technologies

Experience: 8+ years

Job Description: As a SecOps Engineer, you will be responsible for ensuring the security and compliance of our systems and infrastructure. You will work closely with our development, architecture, and DevOps teams to identify and remediate vulnerabilities, implement security best practices, automate security processes, and ensure compliance with corporate and industry standards. You will also conduct security assessments of our systems and infrastructure to identify vulnerabilities and risks, identify risk owners, and implement mitigating controls.

Required Skills/Experience:
- 5 or more years of relevant job experience on AWS
- Proficiency in at least one scripting language such as Bash or Python
- Experience remediating security issues in IAM, S3, AWS Security Groups, NACL, IGW, VPC Network Firewall, VPC, Endpoints, and other AWS resources
- Expertise in writing JSON IAM and S3 policies; deep understanding of the AWS policy language
- Experience in AWS account security auditing
- Experience scripting against the AWS APIs (boto3 or the AWS CLI)
- Good understanding of TCP/IP networking principles and protocols
- Experience maintaining secure and reliable cloud infrastructure (OS patching, backup, monitoring, secure logging, and user account creation)
- Support or modify underlying AWS infrastructure and services for security hardening
- Familiarity with cloud deployment automation and building CI/CD pipelines to support cloud-based workloads
- Stay on top of the latest AWS security trends and develop expertise in emerging cloud security technologies
- Develop and maintain technical documentation in Atlassian Confluence
- Proficiency with Atlassian Jira ticketing and project management
- Experience troubleshooting technical security issues
- Proficiency in both verbal and written communication in English
- Availability to work mostly Pacific daytime hours (until 2 pm Pacific time)

Desired Skills/Experience:
- AWS Security - Specialty certification with a minimum of 3 years of practical experience securing AWS environments
- Master's/Bachelor's degree in Computer Science, Computer Engineering, Electrical Engineering, or a related technical field, and two years of experience in related software or systems
- 8+ years overall industry experience
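The SecOps posting above stresses writing JSON IAM policies in code. A minimal sketch of building and serializing a least-privilege S3 read policy (the bucket name is hypothetical):

```python
import json

# Hypothetical bucket; a real policy would name your own resources.
BUCKET = "example-audit-logs"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                # ListBucket applies to the bucket ARN, GetObject to the objects.
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
        }
    ],
}

document = json.dumps(policy, indent=2)
print(document)
```

Generating policies as Python dicts rather than hand-edited JSON makes them easy to template per environment and to validate in unit tests before they ever reach IAM.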
Posted 3 weeks ago
5.0 - 7.0 years
27 - 30 Lacs
hyderabad, chennai
Work from Office
Experience required: 7+ years Core Generative AI & LLM Skills: * 5+ years in Software Engineering, 1+ year in Generative AI. * Strong understanding of LLMs, prompt engineering, and RAG. * Experience with multi-agent system design (planning, delegation, feedback). * Hands-on with LangChain (tools, memory, callbacks) and LangGraph (multi-agent orchestration). * Proficient in using vector DBs (OpenSearch, Pinecone, FAISS, Weaviate). * Skilled in Amazon Bedrock and integrating LLMs like Claude, Titan, Llama. * Strong Python (LangChain, LangGraph, FastAPI, boto3). * Experience building MCP servers/tools. * Designed robust APIs, integrated external tools with agents. * AWS proficiency: Lambda, API Gateway, DynamoDB, S3, Neptune, Bedrock Agents * Knowledge of data privacy, output filtering, audit logging * Familiar with AWS IAM, VPCs, and KMS encryption Desired Skills: * Integration with Confluence, CRMs, knowledge bases, etc. * Observability with Langfuse, OpenTelemetry, Prompt Catalog * Understanding of model alignment & bias mitigation
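The RAG requirement above hinges on retrieving the most relevant context by vector similarity before prompting the LLM. A toy sketch of that retrieval step (the vectors are fabricated; a real pipeline would use model-generated embeddings and a vector DB such as OpenSearch or FAISS):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy "embeddings" standing in for vectors from a real embedding model.
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
}
query = [0.85, 0.15, 0.0]

best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)  # the retrieved context to prepend to the LLM prompt
```

Vector DBs replace the brute-force `max` with an approximate nearest-neighbor index, but the similarity computation they rank by is the same idea.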
Posted 3 weeks ago
5.0 - 8.0 years
0 Lacs
noida, uttar pradesh, india
On-site
Why Join Us
Are you inspired to grow your career at one of India's Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest-growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It's happening right here at Iris Software.

About Iris Software
At Iris Software, our vision is to be our clients' most trusted technology partner, and the first choice for the industry's top professionals to realize their full potential. With over 4,300 associates across India, the U.S.A., and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation.

Working at Iris
Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about Being Your Best as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We're a place where everyone can discover and be their best version.
Job Description
Automation test engineer with experience using AWS and scripting in Python. Knowledge of the Boto3 framework is required. The test engineer should be able to test infrastructure provisioned using CDK (created and deleted) and also test the full pipeline; scripts to test the persona (role

Experience Required: 5 - 8 Yrs

The role involves execution of testing, monitoring, and operational activities of varying complexity based on the assigned portfolio, ensuring adherence to established service levels and standards.
- Executes identified test programs for a variety of specializations to support effective testing and monitoring of controls within business groups and across the Bank.
- Understands the business/group strategy and develops and maintains knowledge of end-to-end processes.
- Executes testing activities and any other operational activities within required service level agreements or standards.
- Develops knowledge related to the program and/or area of specialty.
- Develops and maintains effective relationships with internal and external business partners/stakeholders to execute work and fulfill service delivery expectations.
- Participates in planning and implementation of operational programs and executes within required service level agreements and standards.
- Supports change management of varying scope and type; tasks typically focused on execution and sustainment activities.
- Executes various operational activities/requirements to ensure timely, accurate, and efficient service delivery.
- Ensures consistent, high-quality practices/work and the achievement of business results in alignment with business/group strategies and productivity goals.
- Analyzes automated test results and provides initial feedback on test results.
- Analyzes root causes of any errors discovered to provide for effective communication of issues to appropriate parties.
- Develops insights and recommends continuous improvements based on test results.
- Creates and maintains adequate monitoring support documentation, such as narratives, flowcharts, process flows, and testing summaries, to support the results of reviews, including the write-up of findings/issues for reporting.

Mandatory Competencies
- QE - Test Automation Preparation
- Beh - Communication
- QA/QE - QA Automation - Python
- Data Science and Machine Learning - Python
- Cloud - AWS - AWS Lambda, AWS EventBridge, AWS Fargate

Perks And Benefits For Irisians
At Iris Software, we offer world-class benefits designed to support the financial, health, and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
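Automation testing against boto3-driven code usually means substituting a test double for the AWS client so checks run without touching real infrastructure. A minimal sketch (the stub and bucket names are invented; libraries like `moto` or `botocore.stub.Stubber` serve the same purpose in practice):

```python
class StubS3Client:
    """Hypothetical stand-in for a boto3 S3 client, for unit-testing
    infrastructure checks without AWS credentials or network calls."""
    def __init__(self, buckets):
        self._buckets = buckets

    def list_buckets(self):
        # Mirrors the response shape of boto3's s3.list_buckets().
        return {"Buckets": [{"Name": b} for b in self._buckets]}

def bucket_exists(s3_client, name: str) -> bool:
    # The code under test only depends on the client interface,
    # so it accepts either a real boto3 client or the stub.
    resp = s3_client.list_buckets()
    return any(b["Name"] == name for b in resp["Buckets"])

stub = StubS3Client(["raw-data", "cleansed-data"])
print(bucket_exists(stub, "raw-data"))   # True
print(bucket_exists(stub, "missing"))    # False
```

Because CDK-provisioned resources are ultimately just AWS resources, the same dependency-injection pattern lets one test suite verify both "created" and "deleted" states by varying what the stub reports.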
Posted 3 weeks ago
0.0 - 5.0 years
4 - 9 Lacs
thane
Remote
Testing and debugging applications. Developing back-end components. Integrating user-facing elements using server-side logic. Assessing and prioritizing client feature requests. Integrating data storage solutions. Required Candidate profile: Knowledge of Python and related frameworks including Django and Flask. A deep understanding of multi-process architecture and the threading limitations of Python. Perks and benefits: Flexible hours. Remote work options.
Posted 3 weeks ago
3.0 - 8.0 years
15 - 25 Lacs
gurugram
Work from Office
Understands the process flow and the impact on the project module outcome. Works on coding assignments for specific technologies based on the project requirements and available documentation. Debugs basic software components and identifies code defects. Focuses on building depth in project-specific technologies. Expected to develop domain knowledge along with technical skills. Effectively communicates with team members, project managers, and clients, as required. A proven high performer and team player, with the ability to take the lead on projects.

Responsibilities:
- Design and create S3 buckets and folder structures (raw, cleansed_data, output, script, temp-dir, spark-ui)
- Develop AWS Lambda functions (Python/Boto3) to download Bhav Copy via REST API and ingest it into S3
- Author and maintain AWS Glue Spark jobs to: partition data by scrip, year, and month; convert CSV to Parquet with Snappy compression
- Configure and run AWS Glue Crawlers to populate the Glue Data Catalog
- Write and optimize AWS Athena SQL queries to generate business-ready datasets
- Monitor, troubleshoot, and tune data workflows for cost and performance
- Document architecture, code, and operational runbooks
- Collaborate with analytics and downstream teams to understand requirements and deliver SLAs

Technical Skills:
- 3+ years of hands-on experience with AWS data services (S3, Lambda, Glue, Athena)
- PostgreSQL basics
- Proficient in SQL and data partitioning strategies
- Experience with Parquet file formats and compression techniques (Snappy)
- Ability to configure Glue Crawlers and manage the AWS Glue Data Catalog
- Understanding of serverless architecture and best practices in security, encryption, and cost control
- Good documentation, communication, and problem-solving skills

Qualifications
- 3-5 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university. A computer science background is preferred.
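The pipeline above partitions data by scrip, year, and month so Glue Crawlers and Athena can prune partitions. The S3 key layout that enables this is simple to construct; a sketch (prefix and filename are illustrative):

```python
from datetime import date

def partitioned_key(prefix: str, scrip: str, trade_date: date, filename: str) -> str:
    """Build a Hive-style partition path (key=value segments), the layout
    Glue Crawlers recognize for automatic partition discovery."""
    return (
        f"{prefix}/scrip={scrip}"
        f"/year={trade_date.year}"
        f"/month={trade_date.month:02d}"
        f"/{filename}"
    )

key = partitioned_key("cleansed_data", "RELIANCE", date(2024, 3, 7),
                      "part-0000.snappy.parquet")
print(key)
```

Zero-padding the month keeps keys lexically sortable, and the `key=value` segments become queryable partition columns in Athena (`WHERE scrip = 'RELIANCE' AND year = 2024`), so scans touch only the matching prefixes.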
Posted 3 weeks ago
4.0 - 6.0 years
12 - 18 Lacs
gurugram
Work from Office
Role Description: As a Senior Software Engineer - AWS Python at Incedo, you will be responsible for developing and maintaining applications on the Amazon Web Services (AWS) platform. You will be expected to have a strong understanding of Python and AWS technologies, including EC2, S3, RDS, and Lambda.

Roles & Responsibilities:
- Writing high-quality code, participating in code reviews, designing systems of varying complexity and scope, and creating high-quality documents substantiating the architecture.
- Engaging with clients, understanding their technical requirements, and planning and liaising with other team members to develop the technical design and approach to deliver end-to-end solutions.
- Mentoring and guiding junior team members: reviewing their code, establishing quality gates, building and deploying code using CI/CD pipelines, applying secure coding practices, adopting unit-testing frameworks, improving coverage, etc. Responsible for the team's growth.

Technical Skills:
- Must have: Python, FastAPI, Uvicorn, SQLAlchemy, boto3, Lambda serverless, pymysql
- Nice to have: AWS Lambda, Step Functions, ECR, ECS, S3, SNS, SQS, Docker, CI/CD
- Proficiency in the Python programming language
- Experience in developing and deploying applications on AWS
- Knowledge of serverless computing and AWS Lambda
- Excellent communication skills; able to communicate complex technical information to non-technical stakeholders in a clear and concise manner
- Must understand the company's long-term vision and align with it
- Provide leadership, guidance, and support to team members, ensuring the successful completion of tasks and promoting a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team

Qualifications
- 4-6 years of work experience in a relevant field
- B.Tech/B.E/M.Tech or MCA degree from a reputed university. A computer science background is preferred.
Posted 3 weeks ago
7.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Senior Site Reliability Engineer - NOC (Nasuni Orchestration Center)

About Nasuni
Nasuni is a profitable, growing SaaS data infrastructure company reinventing enterprise file storage and data management in an AI-driven world. We power the data infrastructure of the world's most innovative enterprises. Backed by Vista Equity Partners, our engineers aren't working behind the scenes; they're building what's next with AI. Our platform lets businesses seamlessly store, access, protect, and unlock AI-driven insights from exploding volumes of unstructured file data. As an engineer here, you'll help build AI-powered infrastructure trusted by 900+ global customers, including Dow, Mattel, and Autodesk. Nasuni is headquartered in Boston, USA, with offices in Cork, Ireland, and London, UK, and we are starting an India Innovation Center in Hyderabad, India, to leverage the abundant IT talent available in India. The company's most recent annual revenue is $160M, growing at 25% CAGR. We have a hybrid work culture: 3 days a week working from the Hyderabad office during core working hours and 2 days working from home.

Job Description
We are excited to be growing the team that builds and maintains the Nasuni Orchestration Center and the SaaS portions of our product portfolio. This team provides the key services and supporting infrastructure in a modern cloud environment. Critical skills include familiarity with AWS, Linux systems, and configuration management, as well as the common DevOps activities, practices, and techniques found in highly automated environments. Candidates for this position will have supported high-scale REST API services as well as customer- and internal-facing web applications. They understand the importance of quality and responsiveness in meeting customer expectations. Success in this position requires you to be a self-motivated team player and an open-minded individual contributor who can help the team reach its larger goals.
Responsibilities
- Support, maintain, and enhance cloud infrastructure through Terraform, CloudFormation, Puppet, and Python.
- Contribute to maturing the DevOps and SRE practices within Nasuni by utilizing methodologies such as CI/CD, Agile, and Acceptance Test Driven Development.
- Take or share ownership of one or more large areas of the Nasuni Orchestration Center hosted at AWS.
- Practice root cause analysis to determine the scope and scale of issue impact. Create epics/stories and construct automation to prevent problem recurrence.
- Develop repeatable tools and processes for automation, configuration, monitoring, and alerting.
- Participate in requirements analysis, design, design reviews, and other work related to expanding Nasuni's functionality.
- Participate in a 24/7 on-call rotation for production systems.
- Work with AWS technologies such as EC2, ECS, Fargate, Aurora, ElastiCache, DynamoDB, API Gateway, and Lambda.
- Be recognized as an expert in one or more technical areas.
- Respond to critical customer-raised incidents in a timely manner; perform root cause analysis and implement preventative measures to avoid future incidents.
- Work with the team to implement the industry's best practices for securing internet-facing applications.
- Continuously improve development processes, tools, and methodologies.

Technical Skills Required
- 5+ years of production experience in an SLA-driven SaaS environment.
- Significant experience with configuration management (Terraform/CloudFormation), Agile Scrum, and CI/CD.
- Experience building, measuring, tuning, supporting, and reporting on high-traffic web services.
- Demonstrated ability to build AMIs with tools like Packer and containers with tools like Docker Compose.
- Comfort following versioning and release strategies with git/GitHub Actions and AWS CodeBuild/CodePipeline/CodeDeploy.
- Experience debugging AWS or other cloud infrastructure.
- Competence with AWS API libraries (boto3), bash, the AWS CLI, and scripting with Python.
- Experience with infrastructure configuration management and automation tools (such as Puppet, Packer, CloudFormation, Terraform), as well as use of containers in CI/CD pipelines and production environments.
- Knowledge of the principles found in the Google SRE book and how to apply them.
- College experience in a related discipline (advanced degrees welcome).

Bonus points for:
- Activity with open-source communities
- Familiarity with SQL and NoSQL databases
- AWS/Azure or other major cloud vendor certification
- Excellent problem-solving and troubleshooting skills
- Experience working in an agile development environment and a solid understanding of agile methodologies
- Strong communication skills
- Demonstrable experience testing and asserting the quality of your work through writing unit, integration, and smoke tests

Experience
- BE/B.Tech or ME/M.Tech in Computer Science or Electronics and Communications, or MCA
- 7 to 10 years of previous experience in the industry

Why Work at Nasuni - Hyderabad Benefits
As part of our commitment to your well-being and growth, Nasuni offers competitive benefits designed to support every stage of your life and career:
- Competitive compensation programs
- Flexible time off and leave policies
- Comprehensive health and wellness coverage
- Hybrid and flexible work arrangements
- Employee referral and recognition programs
- Professional development and learning support
- Inclusive, collaborative team culture
- Modern office spaces with team events and perks
- Retirement and statutory benefits as per Indian regulations

To all recruitment agencies: Nasuni does not accept agency resumes. Please do not forward resumes to our job boards, Nasuni employees, or any other company location. Nasuni is not responsible for any fees related to unsolicited resumes. Nasuni is proud to be an equal opportunity employer. We are committed to fostering a diverse, inclusive, and respectful workplace where every team member can thrive.
All qualified applicants will receive consideration for employment without regard to race, religion, caste, color, sex, gender identity or expression, sexual orientation, disability, age, national origin, or any other status protected by applicable laws in India or the country of employment. We celebrate individuality and are committed to building a workplace that reflects the diversity of the communities we serve. If you require accommodation during the recruitment process, please let us know. This privacy notice relates to information collected (whether online or offline) by Nasuni Corporation and our corporate affiliates (collectively, Nasuni) from or about you in your capacity as a Nasuni employee, independent contractor/service provider or as an applicant for an employment or contractor relationship with Nasuni.
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
Infiniti Research is looking for an AWS DevOps Engineer to join their team in Bangalore. The ideal candidate should have 1.5 to 3 years of experience and be proficient in AWS cloud technologies, Docker, scripting, and automation. The role will involve managing cloud infrastructure, developing CI/CD pipelines, containerizing applications with Docker, scripting for automation, monitoring system performance, collaborating with development teams, and maintaining documentation.

Key Responsibilities
- Cloud Infrastructure Management: Design, implement, and maintain scalable and secure AWS infrastructure.
- CI/CD Pipeline Development: Develop and manage pipelines to automate build, test, and deployment processes.
- Containerization: Use Docker to containerize applications across different environments.
- Scripting and Automation: Write and maintain Bash scripts for automation and system management.
- Monitoring and Performance Optimization: Monitor system performance, troubleshoot issues, and implement optimizations.
- Collaboration: Work closely with development teams to integrate workflows into the CI/CD process.
- Documentation: Maintain clear and comprehensive documentation of infrastructure, processes, and configurations.

Required Skills and Qualifications
- AWS Expertise: Strong experience with AWS services and architecture best practices.
- Docker Proficiency: Hands-on experience with Docker and Dockerfiles.
- Bash Scripting: Proven experience in writing and executing Bash scripts.
- Python Knowledge: Familiarity with Python for scripting tasks is a plus.
- AWS Certification: AWS certification preferred.
- CI/CD Experience: Experience with CI/CD tools such as Azure DevOps, Azure Pipelines, and AWS CodePipeline.

Education and Experience
- Bachelor's degree in Computer Science, Engineering, or a related field.

If you have any queries, please contact ramyasrikarthika@infinitiresearch.com or visit www.infinitiresearch.com.
Posted 1 month ago
5.0 - 10.0 years
15 - 30 Lacs
Pune, Ahmedabad
Work from Office
As a Senior platform engineer, you are expected to design and develop key components that power our platform. You will be building a secure, scalable, and highly performant distributed platform that connects multiple cloud platforms like AWS, Azure, and GCP. Job Title: Sr. Platform Engineer Location: Ahmedabad/Pune Experience: 5 + Years Educational Qualification: UG: BS/MS in Computer Science, or other engineering/technical degree Responsibilities: Take full ownership of developing, maintaining, and enhancing specific modules of our cloud management platform, ensuring they meet our standards for scalability, efficiency, and reliability. Design and implement serverless applications and event-driven systems that integrate seamlessly with AWS services, driving the platform's innovation forward. Work closely with cross-functional teams to conceptualize, design, and implement advanced features and functionalities that align with our business goals. Utilize your deep expertise in cloud architecture and software development to provide technical guidance and best practices to the engineering team, enhancing the platform's capabilities. Stay ahead of the curve by researching and applying the latest trends and technologies in the cloud industry, incorporating these insights into the development of our platform. Solve complex technical issues, providing advanced support and guidance to both internal teams and external stakeholders. Requirements: A minimum of 5 years of relevant experience in platform or application development, with a strong emphasis on Python and AWS cloud services. Proven expertise in serverless development and event-driven architecture design, with a track record of developing and shipping high-quality SaaS platforms on AWS. Comprehensive understanding of cloud computing concepts, architectural best practices, and AWS services, including but not limited to Lambda, RDS, DynamoDB, and API Gateway. 
Solid knowledge of object-oriented programming (OOP), SOLID principles, and experience with relational and NoSQL databases. Proficiency in developing and integrating RESTful APIs and familiarity with source control systems like Git. Exceptional problem-solving skills, capable of optimizing complex systems. Excellent communication skills, capable of effectively collaborating with team members and engaging with stakeholders. A strong drive for continuous learning and staying updated with industry developments. Nice to Have: AWS Certified Solutions Architect, AWS Certified Developer, or other relevant cloud development certifications. Experience with the AWS Boto3 SDK for Python. Exposure to other cloud platforms such as Azure or GCP. Knowledge of containerization and orchestration technologies, such as Docker and Kubernetes. Experience: 5 years of relevant experience in platform or application development, with a strong emphasis on Python and AWS cloud services. 1+ years of experience working on applications built using Serverless architecture. 1+ years of hands-on experience with Microservices Architecture in live projects. 1+ years of experience applying Domain-Driven Design principles in projects. 1+ years of experience working with Event-Driven Architecture in real-world applications. 1+ years of experience integrating, consuming, and maintaining AWS services. 1+ years of experience working with Boto3 in Python.
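The event-driven architecture this role centers on can be sketched in a few lines of plain Python; the in-memory event bus below is purely illustrative (class, event, and handler names are invented, not the platform's actual code):

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-memory event bus illustrating event-driven design."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        # Fan the event out to every handler registered for this type.
        for handler in self._subscribers[event_type]:
            handler(payload)

received = []
bus = EventBus()
bus.subscribe("instance.created", lambda e: received.append(e["instance_id"]))
bus.publish("instance.created", {"instance_id": "i-0abc123"})
print(received)  # -> ['i-0abc123']
```

In a real serverless deployment the bus role is played by a managed service (e.g. SNS, SQS, or EventBridge) and the handlers by Lambda functions, but the publish/subscribe shape is the same.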
Posted 1 month ago
4.0 - 6.0 years
6 - 10 Lacs
Gurugram
Work from Office
Role Description: As a Senior Software Engineer - AWS Python at Incedo, you will be responsible for developing and maintaining applications on the Amazon Web Services (AWS) platform. You will be expected to have a strong understanding of Python and AWS technologies, including EC2, S3, RDS, and Lambda. Roles & Responsibilities: Writing high-quality code, participating in code reviews, designing systems of varying complexity and scope, and creating high-quality documents substantiating the architecture. Engaging with clients, understanding their technical requirements, planning and liaising with other team members to develop technical design & approach to deliver end-to-end solutions. Mentor & guide junior team members, review their code, establish quality gates, build & deploy code using CI/CD pipelines, apply secure coding practices, adopt unit-testing frameworks, provide better coverage, etc. Responsible for the team's growth. Technical Skills: Must have: Python, FastAPI, Uvicorn, SQLAlchemy, boto3, Lambda serverless, PyMySQL Nice to have: AWS Lambda, Step Functions, ECR, ECS, S3, SNS, SQS, Docker, CI/CD Proficiency in Python programming language Experience in developing and deploying applications on AWS Knowledge of serverless computing and AWS Lambda Must have excellent communication skills and be able to communicate complex technical information to non-technical stakeholders in a clear and concise manner. Must understand the company's long-term vision and align with it. Provide leadership, guidance, and support to team members, ensuring the successful completion of tasks, and promoting a positive work environment that fosters collaboration and productivity, taking responsibility for the whole team. Qualifications 4-6 years of work experience in relevant field B.Tech/B.E/M.Tech or MCA degree from a reputed university. Computer science background is preferred
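For context on the Lambda serverless requirement above, a minimal AWS Lambda-style handler for an API Gateway proxy event might look like the sketch below; the field names follow the API Gateway proxy integration format, but the business logic and values are hypothetical:

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda-style handler for an API Gateway proxy event.

    Illustrative sketch only: a real handler would validate input,
    call downstream services (e.g. via boto3), and handle errors.
    """
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }

# Simulate an API Gateway invocation locally.
resp = lambda_handler({"body": json.dumps({"name": "Incedo"})}, None)
print(resp["statusCode"])  # -> 200
```

Deployed behind API Gateway, the same function is invoked per request; locally it is just a plain Python callable, which is what makes it easy to unit-test.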
Posted 1 month ago
5.0 - 8.0 years
8 - 12 Lacs
Noida
Work from Office
Automation test engineer with experience using AWS and scripting in Python. Knowledge of the Boto3 framework is required. The test engineer should be able to test infrastructure provisioned using CDK (created and deleted) and also test the full pipeline, including scripts to test the persona (role). Experience Required: 5 - 8 Yrs Involves execution of testing, monitoring and operational activities of various complexity based on assigned portfolio, ensuring adherence to established service levels and standards. Executes identified test programs for a variety of specializations to support effective testing & monitoring of controls within business groups and across the Bank. Understands the business/group strategy and develops and maintains knowledge of end-to-end processes. Executes testing activities and any other operational activities within required service level agreements or standards. Develops knowledge related to program and/or area of specialty. Develops and maintains effective relationships with internal & external business partners/stakeholders to execute work and fulfill service delivery expectations. Participates in planning and implementation of operational programs and executes within required service level agreements and standards. Supports change management of varying scope and type; tasks typically focused on execution and sustainment activities. Executes various operational activities/requirements to ensure timely, accurate, and efficient service delivery. Ensures consistent, high quality practices/work and the achievement of business results in alignment with business/group strategies and with productivity goals. Analyzes automated test results and provides initial feedback on test results. Analyzes root causes of any errors discovered to provide for effective communication of issues to appropriate parties. Develops insights and recommends continuous improvements based on test results.
Creates and maintains adequate monitoring support documentation, such as narratives, flowcharts, process flows, testing summaries, etc., to support the results of the reviews, including the write-up of findings/issues for reporting. Mandatory Competencies QE - Test Automation Preparation Beh - Communication QA/QE - QA Automation - Python Data Science and Machine Learning - Data Science and Machine Learning - Python Cloud - AWS - AWS Lambda, AWS EventBridge, AWS Fargate
Posted 1 month ago
3.0 - 8.0 years
6 - 10 Lacs
Gurugram
Work from Office
Understands the process flow and the impact on the project module outcome. Works on coding assignments for specific technologies based on the project requirements and available documentation. Debugs basic software components and identifies code defects. Focuses on building depth in project-specific technologies. Expected to develop domain knowledge along with technical skills. Effectively communicate with team members, project managers and clients, as required. A proven high-performer and team-player, with the ability to take the lead on projects. Design and create S3 buckets and folder structures (raw, cleansed_data, output, script, temp-dir, spark-ui) Develop AWS Lambda functions (Python/Boto3) to download Bhav Copy via REST API and ingest into S3 Author and maintain AWS Glue Spark jobs to: partition data by scrip, year and month; convert CSV to Parquet with Snappy compression Configure and run AWS Glue Crawlers to populate the Glue Data Catalog Write and optimize AWS Athena SQL queries to generate business-ready datasets Monitor, troubleshoot and tune data workflows for cost and performance Document architecture, code and operational runbooks Collaborate with analytics and downstream teams to understand requirements and deliver SLAs Technical Skills 3+ years hands-on experience with AWS data services (S3, Lambda, Glue, Athena) PostgreSQL basics Proficient in SQL and data partitioning strategies Experience with Parquet file formats and compression techniques (Snappy) Ability to configure Glue Crawlers and manage the AWS Glue Data Catalog Understanding of serverless architecture and best practices in security, encryption and cost control Good documentation, communication and problem-solving skills Qualifications 3-5 years of work experience in relevant field B.Tech/B.E/M.Tech or MCA degree from a reputed university. Computer science background is preferred
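The scrip/year/month partitioning described above typically maps to Hive-style S3 key prefixes, which is what lets Glue Crawlers and Athena prune partitions; a small helper sketch (the prefix layout and names are illustrative, not the project's actual convention):

```python
from datetime import date

def partition_key(scrip: str, trade_date: date) -> str:
    """Build a Hive-style S3 prefix partitioned by scrip, year and month.

    Illustrative layout: a Glue Crawler pointed at this structure would
    register scrip/year/month as partition columns in the Data Catalog,
    so Athena queries filtering on them scan only matching prefixes.
    """
    return (
        f"cleansed_data/scrip={scrip}/"
        f"year={trade_date.year}/month={trade_date.month:02d}/"
    )

print(partition_key("RELIANCE", date(2024, 3, 7)))
# -> cleansed_data/scrip=RELIANCE/year=2024/month=03/
```

A Glue Spark job would then write Parquet with Snappy compression under these prefixes (e.g. `df.write.partitionBy("scrip", "year", "month").parquet(...)` in PySpark).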
Posted 1 month ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Ready to shape the future of work At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onwards, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions - we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Inviting applications for the role of Senior Principal Consultant - Generative AI - Application Development Senior Developer We are looking for a Senior Application Developer to join our product engineering team. This role requires hands-on experience in designing and developing scalable application components with a strong focus on API development, middleware orchestration, and data transformation workflows. You will be responsible for building foundational components that integrate data pipelines, orchestration layers, and user interfaces, enabling next-gen digital and AI-powered experiences. Key Responsibilities: Design, develop, and manage robust APIs and middleware services using Python frameworks like FastAPI and Uvicorn, ensuring scalable and secure access to platform capabilities.
Develop end-to-end data transformation workflows and pipelines using LangChain, spacy, tiktoken, presidio-analyzer, and llm-guard, enabling intelligent content and data processing. Implement integration layers and orchestration logic for seamless communication between data sources, services, and UI using technologies like OpenSearch, boto3, requests-aws4auth, and urllib3. Work closely with UI/UX teams to integrate APIs into modern front-end frameworks such as ReactJS, Redux Toolkit, and Material UI. Build configurable modules for ingestion, processing, and output using Python libraries like PyMuPDF, openpyxl, and Unidecode for handling structured and unstructured data. Implement best practices for API security, data privacy, and anonymization using tools like presidio-anonymizer and llm-guard. Drive continuous improvement in performance, scalability, and reliability of the platform architecture. Qualifications we seek in you: Minimum Qualifications Experience in software development in enterprise/web applications Languages & Frameworks: Python, JavaScript/TypeScript, FastAPI, ReactJS, Redux Toolkit Libraries & Tools: langchain, presidio-analyzer, PyMuPDF, spacy, rake-nltk, inflection, openpyxl, tiktoken APIs & Integration: FastAPI, requests, urllib3, boto3, opensearch-py, requests-aws4auth UI/UX: ReactJS, Material UI, LESS Cloud & DevOps: AWS SDKs, API gateways, logging, and monitoring frameworks (optional experience with serverless is a plus) Preferred Qualifications: Strong understanding of API lifecycle management, REST principles, and microservices. Experience in data transformation, document processing, and middleware architecture. Exposure to AI/ML or Generative AI workflows using LangChain or OpenAI APIs. Prior experience working on secure and compliant systems involving user data. Experience in CI/CD pipelines, containerization (Docker), and cloud-native deployments (AWS preferred).
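As a rough illustration of the anonymization idea behind presidio-anonymizer and llm-guard (this is not their API, just a toy regex-based stand-in for one PII type):

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_emails(text: str, placeholder: str = "<EMAIL>") -> str:
    """Replace email addresses with a placeholder token.

    Toy stand-in for a presidio-style anonymization pass: real PII
    detection combines NER models, checksums, and context scoring,
    not a single regex, but the replace-with-placeholder flow is
    the same shape.
    """
    return EMAIL_RE.sub(placeholder, text)

print(mask_emails("Contact jane.doe@example.com for access."))
# -> Contact <EMAIL> for access.
```

In a production pipeline this step would sit between ingestion and any LLM call, so raw identifiers never reach the model or its logs.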
Why join Genpact Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation Make an impact - Drive change for global enterprises and solve business challenges that matter Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture - Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 2 months ago
6.0 - 10.0 years
0 Lacs
pune, maharashtra
On-site
At Velsera, we are committed to revolutionizing the pace of medicine. Established in 2023 by the collaboration of Seven Bridges and Pierian, our primary goal is to expedite the discovery, development, and dissemination of groundbreaking insights that can change lives for the better. We specialize in offering cutting-edge software solutions and professional services that cater to various aspects of the healthcare industry, including: - AI-powered multimodal data harmonization and analytics for drug discovery and development - IVD development, validation, and regulatory approval - Clinical NGS interpretation, reporting, and adoption Headquartered in Boston, MA, we are in a phase of rapid growth, with teams expanding across different countries to meet the increasing demands of our clients. As a Python Developer at Velsera, your responsibilities will include: - Development: Crafting clean, efficient, and well-documented Python code to fulfill project requirements - API Development: Creating RESTful APIs and integrating third-party APIs when necessary - Testing: Composing unit tests and integration tests to ensure high code quality and functionality - Collaboration: Collaborating closely with cross-functional teams to implement new features and enhance existing ones - Code Review: Participating in peer code reviews and offering constructive feedback to team members - Maintenance: Debugging, troubleshooting, and enhancing the existing codebase to boost performance and scalability. Proactively identifying technical debt items and proposing solutions to address them - Documentation: Maintaining detailed and accurate documentation for code, processes, and design - Continuous Improvement: Staying updated with the latest Python libraries, frameworks, and industry best practices. 
To excel in this role, you should bring: - Experience: A minimum of 6 years of hands-on experience in Python development - Technical Skills: Proficiency in Python 3.x, familiarity with popular Python libraries (e.g., NumPy, pandas, Flask, boto3), experience in developing Lambda functions, strong understanding of RESTful web services and APIs, familiarity with relational databases (e.g., PostgreSQL) and NoSQL databases (e.g., MongoDB), knowledge of version control systems (e.g., Git), experience with Docker and containerization, experience with AWS services such as ECR, Batch jobs, Step Functions, CloudWatch, etc., and experience with Jenkins is a plus - Problem-Solving Skills: Strong analytical and debugging skills with the ability to troubleshoot complex issues - Soft Skills: Strong written and verbal communication skills, ability to work independently and collaboratively in a team environment, detail-oriented with the capacity to manage multiple tasks and priorities. Preferred skills include experience working in the healthcare or life sciences domain, strong understanding of application security and OWASP best practices, hands-on experience with serverless architectures (e.g., AWS Lambda), and proven experience in mentoring junior developers and conducting code reviews. Velsera offers a range of benefits, including a Flexible & Hybrid Work Model to support work-life balance and an Engaging & Fun Work Culture that includes vibrant workplace events, celebrations, and engaging activities to make every workday enjoyable.
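For the unit-testing responsibility described above, a minimal sketch using Python's built-in unittest framework; the function under test is hypothetical, invented only to show the test shape:

```python
import unittest

def normalize_gene_symbol(symbol: str) -> str:
    """Hypothetical helper: trim whitespace and upper-case a gene symbol."""
    return symbol.strip().upper()

class NormalizeGeneSymbolTests(unittest.TestCase):
    """Unit tests for the hypothetical normalizer."""

    def test_strips_and_uppercases(self):
        self.assertEqual(normalize_gene_symbol("  brca1 "), "BRCA1")

    def test_already_normalized_is_unchanged(self):
        self.assertEqual(normalize_gene_symbol("TP53"), "TP53")

# Run the suite programmatically (normally: python -m unittest).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NormalizeGeneSymbolTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # -> True
```

The same structure extends to integration tests, with the setUp method standing up fixtures such as a test database or stubbed AWS clients.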
Posted 2 months ago
2.0 - 6.0 years
0 Lacs
noida, uttar pradesh
On-site
You have an exciting opportunity to join our team as a Cloud Infrastructure Engineer with a focus on AWS CDK and expertise in Python or TypeScript. In this role, you will be responsible for developing scalable and secure cloud infrastructure components that support modern applications. Whether you excel at scripting with Python or creating CDK constructs with TypeScript, we are looking for individuals who are passionate about infrastructure automation and software engineering. As a Cloud Infrastructure Engineer, your primary responsibilities will include designing, building, and maintaining AWS infrastructure using AWS CDK in Python or TypeScript. You will also be tasked with developing reusable CDK constructs to model various components such as VPCs, Lambda functions, EC2 instances, IAM policies, and more. Additionally, you will automate deployments using the CDK CLI, manage dependencies and environments, implement tests for infrastructure code, troubleshoot deployment issues, and collaborate closely with DevOps and Architecture teams to ensure secure and scalable cloud solutions. To succeed in this role, you should possess 4+ years of DevOps experience with a strong background in AWS, along with 2+ years of experience working with AWS CDK in Python or TypeScript. Deep knowledge of AWS services such as EC2, Lambda, VPC, S3, IAM, and Security Groups is essential. Experience with the AWS CLI, Boto3 (for Python), or Node.js/npm (for TypeScript) is also required. Familiarity with infrastructure test frameworks, CI/CD processes, and troubleshooting CloudFormation templates is a plus. If you have experience with Docker and Kubernetes, exposure to Terraform or multi-IaC environments, and a strong understanding of AWS security, scalability, and cost optimization practices, it would be considered a nice-to-have for this role.
Join us in this dynamic opportunity where you can contribute to building cutting-edge cloud infrastructure and be part of a team that values collaboration, innovation, and excellence in cloud solutions.
Posted 2 months ago
0.0 - 5.0 years
4 - 9 Lacs
Chennai
Remote
Coordinating with development teams to determine application requirements. Writing scalable code using the Python programming language. Testing and debugging applications. Developing back-end components. Required Candidate profile Knowledge of Python and related frameworks including Django and Flask. A deep understanding of multi-process architecture and the threading limitations of Python. Perks and benefits Flexible Work Arrangements.
Posted 2 months ago
5.0 - 8.0 years
14 - 22 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Hiring For Top IT Company - Designation: Python Developer Skills: Python + PySpark Location: Bang/Mumbai Exp: 5-8 yrs Best CTC 9783460933 9549198246 9982845569 7665831761 6377522517 7240017049 Team Converse
Posted 2 months ago
4.0 - 8.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Role: DevOps/SRE Engineer with Python We are looking for a talented and experienced DevOps/Site Reliability Engineer (SRE) with a strong proficiency in Python to join our team at Cloud Raptor. The ideal candidate will be responsible for optimizing our company's production environment and ensuring the reliability and stability of our systems. Key Responsibilities: 1. Collaborate with development teams to design, develop, and maintain infrastructure for our highly available and scalable applications. 2. Automate processes using Python scripting to streamline the deployment and monitoring of our applications. 3. Monitor and manage cloud infrastructure on AWS, including EC2, S3, RDS, and Lambda. 4. Implement and manage CI/CD pipelines for automated testing and deployment of applications. 5. Troubleshoot and resolve production issues, ensuring high availability and performance of our systems. 6. Collaborate with cross-functional teams to ensure security, scalability, and reliability of our infrastructure. 7. Develop and maintain documentation for system configurations, processes, and procedures. Key Requirements: 1. Bachelor's degree in Computer Science, Engineering, or a related field. 2. 3+ years of experience in a DevOps/SRE role, with a strong focus on automation and infrastructure as code. 3. Proficiency in Python scripting for automation and infrastructure management. 4. Hands-on experience with containerization technologies such as Docker and Kubernetes. 5. Strong knowledge of cloud platforms such as AWS, including infrastructure provisioning and management. 6. Experience with monitoring and logging tools such as Prometheus, Grafana, and the ELK stack. 7. Knowledge of CI/CD tools like Jenkins or GitHub Actions. 8. Familiarity with configuration management tools such as Ansible, Puppet, or Chef. 9. Strong problem-solving and troubleshooting skills, with an ability to work in a fast-paced and dynamic environment. 10. Excellent communication and collaboration skills to work effectively with cross-functional teams.
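A staple of the Python automation this role describes is retrying transient failures with exponential backoff; a minimal illustrative sketch (delays are shortened for demonstration, and the flaky function is invented):

```python
import time

def retry(func, attempts: int = 3, base_delay: float = 0.01):
    """Call func, retrying on any exception with exponential backoff.

    Sketch of a common SRE automation pattern; production versions
    usually add jitter, a retryable-exception allowlist, and logging.
    """
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts, surface the error
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky():
    """Simulated operation that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(retry(flaky))  # -> ok
```

The same wrapper applies to health checks, deployment polling, or API calls against eventually consistent cloud services.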
Posted 2 months ago
10.0 - 15.0 years
12 - 16 Lacs
Hyderabad
Work from Office
JD for Data Engineering Lead - Python: Data Engineering Lead with 7 to 10 years of experience in Python with the following AWS services: AWS SQS, AWS MSK, AWS RDS Aurora DB, Boto3, API Gateway, and CloudWatch. Providing architectural guidance to the offshore team, reviewing code and troubleshooting errors. Very strong SQL knowledge is a must; should be able to understand & build complex queries. Familiar with GitLab (repos and CI/CD pipelines). He/she should work closely with the Virtusa onshore team as well as the enterprise architect & other client teams onsite as needed. Experience in API development using Python is a plus. Experience in building MDM solutions is a plus.
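"Complex queries" in roles like this usually implies window functions and similar constructs; a self-contained sketch using Python's built-in sqlite3 (the schema and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('acme', 100.0), ('acme', 250.0), ('globex', 75.0);
""")

# Per-customer running total: the kind of windowed aggregate the
# "complex queries" requirement points at (syntax is standard SQL,
# so the same query ports to Aurora/PostgreSQL).
rows = conn.execute("""
    SELECT customer, amount,
           SUM(amount) OVER (PARTITION BY customer ORDER BY amount)
               AS running_total
    FROM orders
    ORDER BY customer, amount
""").fetchall()
print(rows)
# -> [('acme', 100.0, 100.0), ('acme', 250.0, 350.0), ('globex', 75.0, 75.0)]
```

Window functions require SQLite 3.25+, which ships with all currently supported Python versions.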
Posted 2 months ago