5.0 - 10.0 years
10 - 20 Lacs
Pune, Bengaluru
Hybrid
Position: API & Data Integration Engineer
Experience: 5-8 Years
Location: Gurgaon / Noida / Pune / Bangalore
Type: Full-Time
Team: Advanced Intelligence Work Group
Contact: 9258253740

About the Role:
We are seeking an experienced API & Data Integration Engineer to design, build, and maintain backend integrations across internal systems, third-party APIs, and AI copilots. The role involves API development, data pipeline engineering, no-code automation, and cloud-based architecture with a focus on scalability, security, and compliance.

Key Responsibilities:
Build and maintain RESTful APIs using FastAPI (see the brief sketch after this listing)
Integrate third-party services (CRMs, SaaS tools)
Develop and manage ETL/ELT pipelines for real-time and batch data flows
Automate workflows using tools like n8n, Zapier, Make.com
Work with AI/ML teams to ensure clean, accessible data
Ensure API performance, monitoring, and security
Maintain integration documentation and ensure GDPR/CCPA compliance

Must-Have Qualifications:
5-8 years in API development, backend, or data integration
Strong in Python, with scripting knowledge in JavaScript or Go
Experience with PostgreSQL, MongoDB, DynamoDB
Hands-on with AWS/Azure/GCP and serverless tools (Lambda, API Gateway)
Familiarity with OAuth2, JWT, SAML
Proven experience in building and managing data pipelines
Comfort with no-code/low-code platforms like n8n and Zapier

Nice to Have:
Experience with Kafka, RabbitMQ, Kong, Apigee
Familiarity with monitoring tools like Datadog, Postman
Cloud or integration certifications

Tech Stack (Mandate):
FastAPI, OpenAPI/Swagger, Python, JavaScript or Go, PostgreSQL, MongoDB, DynamoDB, AWS/Azure/GCP, Lambda, API Gateway, ETL/ELT pipelines, n8n, Zapier, Make.com, OAuth2, JWT, SAML, GDPR, CCPA

What We Value:
Strong problem-solving and debugging skills
Cross-functional collaboration with Data, Product, and AI teams
Get Stuff Done attitude
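The FastAPI bullet above maps to a small, well-trodden pattern. As a hedged illustration only (the endpoint, model, and field names are hypothetical, not taken from this posting), a minimal integration-style endpoint might look like this:

# Minimal FastAPI sketch: one POST endpoint that accepts a CRM-style
# contact payload, validates it with Pydantic, and acknowledges it.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="integration-api")

class Contact(BaseModel):
    email: str
    full_name: str
    source_crm: str  # which third-party system the record came from (placeholder field)

@app.post("/contacts")
def ingest_contact(contact: Contact) -> dict:
    # In a real integration this is where the record would be pushed to a
    # downstream pipeline or queue; here it is only validated and echoed.
    if "@" not in contact.email:
        raise HTTPException(status_code=422, detail="invalid email address")
    return {"status": "accepted", "email": contact.email, "source": contact.source_crm}

Run locally with, for example, uvicorn main:app --reload (assuming the file is saved as main.py).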
Posted 21 hours ago
4.0 - 9.0 years
9 - 11 Lacs
Mumbai
Work from Office
We are hiring for:
Role: AWS Infrastructure Engineer
Experience: 4+ Years
Location: Andheri East, Mumbai
Work Mode: WFO

Required skills:
Hands-on experience with multi-cloud environments (e.g., Azure, AWS, GCP)
Design and maintain AWS infrastructure (EC2, S3, VPC, RDS, IAM, Lambda, and other AWS services)
Implement security best practices (IAM, GuardDuty, Security Hub, WAF)
Configure and troubleshoot AWS networking, hybrid connectivity, and URL filtering solutions (VPC, TGW, Route 53, VPNs, Direct Connect)
Experience managing physical firewalls (Palo Alto, Cisco, etc.)
Manage, troubleshoot, configure, and optimize services like Apache, NGINX, and MySQL/PostgreSQL on Linux/Windows
Ensure Linux/Windows server compliance with patch management and security updates
Provide L2/L3 support for Linux and Windows systems, ensuring minimal downtime and quick resolution of incidents
Collaborate with DevOps, application, and database teams to ensure seamless integration of infrastructure solutions
Automate tasks using Terraform, CloudFormation, or scripting (Bash, Python)
Monitor and optimize cloud resources using CloudWatch, Trusted Advisor, and Cost Explorer

Requirements:
4+ years of AWS experience and system administration in Linux & Windows
Proficiency in AWS networking, security, and automation tools
Certifications: AWS Solutions Architect (required), RHCSA/MCSE (preferred)
Strong communication and problem-solving skills
Web servers: Apache2, NGINX, IIS; OS: Ubuntu, Windows Server
Certification: AWS Certified Solutions Architect - Associate

Interview process: one-day, face-to-face interview at the Andheri East office; virtual interviews are not offered. Candidates who are immediate joiners and ready to come for a face-to-face discussion are preferred.
Posted 3 days ago
4.0 - 6.0 years
12 - 15 Lacs
Noida
Work from Office
We are seeking an experienced DevOps Engineer to join our engineering organization to design, operate, and scale the infrastructure that supports our production services. The successful candidate will combine strong hands-on AWS and Kubernetes experience with a rigorous approach to automation, reliability, security, and operational excellence in a fast-paced, agile environment.

Key Responsibilities
Ensure production services are highly available and reliable, with proactive monitoring and incident management to support 24x7 operations.
Design, implement, and operate secure, resilient AWS infrastructure using EKS, ECS, Lambda, S3, RDS, Route 53, VPC, EC2, CloudFormation, SNS, and Secrets Manager.
Operate and optimize Kubernetes clusters and containerized workloads; implement best practices for scaling, resource utilization, and upgrades.
Build, maintain, and improve CI/CD pipelines (preferably GitHub Actions) to automate builds, tests, and deployments with safe rollback strategies.
Own deployment activities for microservices and distributed applications; collaborate with engineering teams on release coordination and deployment automation.
Drive infrastructure automation, configuration-as-code, and IaC best practices; maintain and evolve CloudFormation templates and related tooling.
Implement and maintain observability: logging, metrics, alerting, and dashboards to support SLO/SLI objectives (a brief alerting sketch follows this posting).
Conduct vulnerability assessments and coordinate remediation with engineering and security teams; participate in incident response and post-incident reviews.
Enforce and administer access controls; handle onboarding/offboarding and secret management with least-privilege principles.
Produce and maintain runbooks, operational procedures, and release documentation.
Support security and compliance initiatives; participation in ISO 20000 or SOC 2 efforts is an advantage.
Mentor and assist new team members on operational practices, access processes, and onboarding.

Required Qualifications (Must-have)
Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a closely related field, or equivalent practical experience.
4+ years of DevOps experience with hands-on expertise in designing, implementing, and operating secure, resilient AWS infrastructure using AWS services (EKS, ECS, Lambda, S3, RDS, Route 53, VPC, CloudFormation, EC2, SNS, Secrets Manager, etc.).
Strong operational experience with Kubernetes (cluster management, networking, storage, upgrades).
Experience implementing CI/CD pipelines using GitHub Actions or equivalent tooling.
Proficiency with shell scripting and experience in at least one programming language (e.g., Python, Go, Java).
Demonstrated experience supporting deployments for scalable production systems.
Solid understanding of computer networking fundamentals: DNS, domains, firewalls, SSL/TLS certificate management, and encryption.
Experience working in an Agile development environment.
Strong documentation and communication skills; ability to produce clear runbooks and release notes.
Proficiency with Terraform and Ansible for automated deployments.

Preferred Qualifications (Good-to-have)
Well-versed in AI-assisted developer tools (for example: GitHub Copilot, ChatGPT, and similar platforms) and experienced using these tools to accelerate code generation, create infrastructure templates, author runbooks, automate repetitive tasks, and support faster troubleshooting.
Awareness of DDoS mitigation strategies and network protection controls.
Experience with vulnerability management programs and formal incident management processes.
Operational experience with Elasticsearch at scale.
Prior work with microservice deployment patterns and distributed application environments.
Experience participating in or implementing ISO 20000 / SOC 2 compliance activities.
Hands-on experience managing distributed application environments at scale.

Personal Attributes
Strong ownership mentality; a pragmatic problem-solver.
Detail-oriented with a bias for automation and repeatability.
Effective collaborator who can communicate operational and security concerns to engineering and business stakeholders.
Comfortable working under pressure and participating in on-call rotations.

About vConstruct:
vConstruct specializes in providing high-quality Building Information Modeling, Lean Construction, Integrated Project Delivery, and software solutions for global construction projects. vConstruct is a wholly owned subsidiary of DPR Construction, USA. The Software Unit at vConstruct works on many interesting problems in construction technology, across software technologies such as web applications, workflow management, analytics, machine learning, and IoT. For more information, please visit www.vconstruct.com.

DPR Construction is a commercial general contractor and construction manager in the USA specializing in technically challenging and sustainable projects for the advanced technology, biopharmaceutical, corporate office, higher education, and healthcare markets. With the purpose of building great things: great teams, great buildings, great relationships, DPR is a truly great company. For more information, please visit www.dpr.com.
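To make the observability and alerting responsibilities above concrete, here is a small, hedged sketch using the AWS SDK for Python (boto3); the alarm name, instance ID, region, threshold, and SNS topic ARN are placeholders rather than values from this posting:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

# Alarm when average CPU on a (hypothetical) EC2 instance stays above 80%
# for three consecutive 5-minute periods; notification goes to an SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="example-api-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:111122223333:ops-alerts"],
)

In practice the same definition would usually live in CloudFormation or Terraform rather than an ad-hoc script, so it stays versioned with the rest of the infrastructure code.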
Posted 3 days ago
5.0 - 9.0 years
0 - 1 Lacs
Noida
Remote
Roles and Responsibilities:
Deliver technical solutions with robust, well-written, well-tested code.
Advocate and advance modern, agile software development practices and help develop and evangelise engineering and organisational practices.
Contribute to growing a healthy, collaborative engineering culture in line with the organization's defined values.
Be an active part of the engineering community team and collaborate with other technical leads.
Identify and resolve technical debt.
Balance business deliverables and technical excellence.
Document and share knowledge across teams where required.

Skillset Needed:
Spring Boot, MySQL/Aurora, Spring Cloud, Spring Data, Java 8-11, JUnit, Spring integration tests.
Strong understanding of Agile Scrum methodology.
Strong working knowledge of RESTful APIs.
Experience working with AWS technologies (Kubernetes, Lambda, Elasticsearch, etc.) and Docker.
Developing in a microservices architecture style.
Good knowledge of software development methodologies and techniques.
Develop maintainable and supportable code: clean, reusable code that's easy to read and test.
Quality-oriented; understands software testing, writes tests where appropriate, and understands test/behaviour-driven development.
Strong problem-solving capability, using appropriate debugging tools.

Experience: 5+ years of experience.

Organization: This is a direct job with iMEGH Private Limited (India). Fluid.Live is the hiring partner for iMEGH Private Limited.
Posted 5 days ago
5.0 - 10.0 years
18 - 33 Lacs
Japan, Chennai
Work from Office
C1X AdTech Pvt Ltd is a fast-growing product and engineering-driven AdTech company building next-generation advertising and marketing technology platforms. Our mission is to empower enterprise clients with the smartest marketing solutions, enabling seamless integration with personalization engines and delivering cross-channel marketing capabilities. We are dedicated to enhancing customer engagement and experiences while focusing on increasing Lifetime Value (LTV) through consistent messaging across all channels. Our engineering team spans front end (UI), back end (Java/Node.js APIs), Big Data, and DevOps, working together to deliver scalable, high-performance products for the digital advertising ecosystem.

Role Overview
As a Data Engineer, you will be a key member of our data engineering team, responsible for building and maintaining large-scale data products and infrastructure. You'll shape the next generation of our data analytics tech stack by leveraging modern big data technologies. This role involves working closely with business stakeholders, product managers, and engineering teams to meet diverse data requirements that drive business insights and product innovation.

Objectives
Design, build, and maintain scalable data infrastructure for collection, storage, and processing.
Enable easy access to reliable data for data scientists, analysts, and business users.
Support data-driven decision-making and improve organizational efficiency through high-quality data products.

Responsibilities
Build large-scale batch and real-time data pipelines using frameworks like Apache Spark on AWS or GCP (see the sketch after this listing).
Design, manage, and automate data flows between multiple data sources.
Implement best practices for continuous integration, testing, and data quality assurance.
Maintain data documentation, definitions, and governance practices.
Optimize performance, scalability, and cost-effectiveness of data systems.
Collaborate with stakeholders to translate business needs into data-driven solutions.

Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field (exceptional coding performance on platforms like LeetCode/HackerRank may substitute).
2+ years' experience working on full-lifecycle Big Data projects.
Strong foundation in data structures, algorithms, and software design principles.
Proficiency in at least two programming languages; Python or Scala preferred.
Experience with AWS services such as EMR, Lambda, S3, DynamoDB (GCP equivalents also relevant).
Hands-on experience with Databricks Notebooks and the Jobs API.
Strong expertise in big data frameworks: Spark, MapReduce, Hadoop, Sqoop, Hive, HDFS, Airflow, Zookeeper.
Familiarity with containerization (Docker) and workflow management tools (Apache Airflow).
Intermediate to advanced knowledge of SQL (relational and NoSQL databases such as Postgres, MySQL, Redshift, Redis).
Experience with SQL tuning, schema design, and analytical programming.
Proficient in Git (version control) and collaborative workflows.
Comfortable working across diverse technologies in a fast-paced, results-oriented environment.
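As a rough sketch of the Spark-based batch pipeline work this role describes (bucket names, paths, and column names below are invented for the example, not C1X specifics), a skeleton PySpark job might look like:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-daily-events").getOrCreate()

# Read one day of raw JSON events from S3 (hypothetical bucket and prefix),
# aggregate per user and event type, then write back as partitioned Parquet.
events = spark.read.json("s3://example-raw-bucket/events/dt=2024-01-01/")

daily = (
    events
    .filter(F.col("event_type").isNotNull())
    .groupBy("user_id", "event_type")
    .agg(F.count("*").alias("event_count"))
    .withColumn("dt", F.lit("2024-01-01"))
)

daily.write.mode("overwrite").partitionBy("dt").parquet(
    "s3://example-curated-bucket/daily_event_counts/"
)

The same skeleton runs on EMR, Glue, or Databricks; only the cluster submission and the storage paths change, which is why schedulers such as Airflow typically parameterize the date and paths per run.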
Posted 6 days ago
3.0 - 8.0 years
15 - 30 Lacs
Bengaluru, Delhi / NCR, Mumbai (All Areas)
Hybrid
We're Hiring: Full Stack Developers (50+ Openings | 3-20 Yrs Exp | Multiple Levels)
Location: Mumbai / Pune / Chandigarh / Bengaluru / Gurugram / Noida / New Delhi (WFO, Hybrid, or Remote, based on role)
Industry: Fortune 500 Client Projects (Staffing via Hatchtra Innotech Pvt. Ltd.)
Employment Type: Full-Time | Contract (C2H)

About us:
Hatchtra is a leading staffing and workforce solutions company, trusted by Fortune 500 organizations and global enterprises to build their technology teams. When you join us, you'll work directly with our Fortune 500 client teams that power mission-critical systems worldwide.

About the Role
We are seeking Full Stack JavaScript Developers (3-20 years) with expertise in Angular, React, Node.js, and Express to join global enterprise projects. In this role, you'll design, develop, and deploy scalable web applications and APIs, working closely with cross-functional teams to deliver high-performance software solutions. Whether you're an engineer passionate about coding or an architect leading large-scale systems, this is your chance to work on cutting-edge web technologies in enterprise environments.

Open Positions & Designations
Full Stack Developer / Frontend Engineer (3-5 Yrs)
Senior Full Stack Developer (5-10 Yrs)
Lead Full Stack Engineer / Solution Developer (8-12 Yrs)
Technical Architect / Engineering Lead (10-15 Yrs)
Director of Engineering / Head of Technology (15-20 Yrs)

Key Responsibilities (Scale with Level)
Professional (3-5 Yrs)
Build and maintain web applications using Angular, React, Node.js, and Express.
Design responsive UIs and reusable front-end components.
Write RESTful APIs and integrate with backend services.
Implement unit testing and follow coding best practices.
Debug, troubleshoot, and optimize application performance.
Collaborate with QA, designers, and product managers.

Mid-Level (5-10 Yrs)
Lead full stack development efforts for enterprise applications.
Architect scalable solutions with microservices and API-driven design.
Implement CI/CD pipelines and DevOps best practices.
Optimize frontend performance and backend architecture.
Mentor junior developers and perform code reviews.
Collaborate with product and business teams to refine technical requirements.

Senior/Leadership (10-20 Yrs)
Drive engineering excellence and define frontend/backend technology strategy.
Lead architecture decisions for large-scale cloud-based systems.
Oversee code quality, scalability, and security best practices.
Build and lead global engineering teams and delivery processes.
Evaluate emerging JavaScript frameworks and tools for innovation.
Partner with stakeholders to align technology with business goals.

Skills & Tools
Core: JavaScript (ES6+), TypeScript, Angular, React, Node.js, Express.js
Frontend: HTML5, CSS3, SASS/LESS, Material UI, Tailwind CSS
Backend: RESTful APIs, GraphQL, Express.js, Microservices Architecture
Databases: MongoDB, PostgreSQL, MySQL, Redis
Cloud & DevOps: AWS, Azure, GCP, Docker, Kubernetes, CI/CD (Jenkins, GitHub Actions)
Testing: Jest, Mocha, Jasmine, Cypress
Version Control & Tools: Git, Jira, Bitbucket, GitHub
Preferred Certifications: AWS Certified Developer, Microsoft Azure Developer Associate

Qualifications
Bachelor's/Master's in Computer Science, Engineering, or a related field.
3+ years of experience in full stack web development roles.
Strong proficiency in JavaScript, TypeScript, Angular, React, Node.js, Express.
Hands-on experience in RESTful API design and scalable web app development.
For senior roles: proven track record in architecture, leadership, and mentoring.

Why Join Us?
Work on enterprise-scale, mission-critical applications for Fortune 500 clients.
Exposure to modern frontend, backend, and cloud-native technologies.
Global career growth and opportunities to lead engineering teams.
Competitive compensation, learning opportunities, and flexible work culture.

How to Apply
For quick consideration, please email your resume and include the desired position and experience level (e.g., "Senior Full Stack Developer – 5-10 Yrs") in the subject line.
Posted 6 days ago
2.0 - 4.0 years
2 - 4 Lacs
Mumbai, Maharashtra, India
On-site
Requirements:
1. Strong hands-on experience with big data technologies like Hadoop, preferably on AWS (Elastic MapReduce), Athena, Glue, Spark, HDFS, DFS, Zookeeper, cluster awareness, EC2, Lambda
2. Must have hands-on experience with Python, Scala, R, and HQL
3. ETL tools such as Airflow, Oozie, Sqoop, Kafka, KSQL
4. Strong hands-on experience designing data models
5. Proficient database skills in Microsoft SQL Server, Postgres, MySQL, AWS Aurora, and RDS
6. Proven ability to debug existing code quickly and point to areas of optimisation
7. Proven ability monitoring production systems
8. Decent scripting knowledge in any programming language
9. Experience in leading a team

Good to have:
1. Experience with caching technologies like Memcached
2. Knowledge of a NoSQL system: Redis, DynamoDB, etc.
3. Knowledge of AWS storage technologies like S3, EMRFS, Aurora, Nitro, etc.
Posted 1 week ago
6.0 - 11.0 years
17 - 32 Lacs
Chennai
Work from Office
Roles and Responsibilities:
Design, develop, and maintain scalable ETL pipelines using PySpark and Python for large-scale data processing.
Work closely with data architects and analysts to understand business requirements and translate them into technical solutions.
Develop and optimize SQL queries for data extraction, transformation, and loading from various data sources.
Integrate data from multiple sources (structured and unstructured) into data lakes or data warehouses on AWS.
Manage data ingestion from real-time and batch sources, including Kafka, S3, and relational databases.
Develop and deploy AWS Lambda functions for event-driven processing and automation tasks (see the brief sketch after this listing).
Ensure data quality, integrity, and consistency throughout the ETL lifecycle.
Monitor ETL job performance, troubleshoot failures, and optimize pipeline execution time.
Implement error handling, logging, and alerting mechanisms to ensure pipeline reliability.
Collaborate with DevOps teams to automate deployments using CI/CD tools.
Maintain documentation for ETL processes, data models, and system architecture.
Ensure adherence to data security and compliance standards across all ETL operations.
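The event-driven Lambda work mentioned above typically reduces to a small handler reacting to S3 notifications. A minimal, hedged sketch, assuming an S3 ObjectCreated trigger and a downstream SQS queue (both hypothetical):

import json
import urllib.parse
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/111122223333/example-ingest"  # placeholder

def lambda_handler(event, context):
    """Triggered by S3 ObjectCreated notifications; forwards object metadata downstream."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        head = s3.head_object(Bucket=bucket, Key=key)
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps(
                {"bucket": bucket, "key": key, "size_bytes": head["ContentLength"]}
            ),
        )
    return {"processed": len(records)}

Error handling, retries via a dead-letter queue, and structured logging to CloudWatch would be layered on top of this skeleton in a production pipeline.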
Posted 1 week ago
10.0 - 20.0 years
15 - 30 Lacs
Noida, Gurugram, Delhi / NCR
Hybrid
Role & responsibilities
Skill: Data Engineer (Python, AWS)
Experience: 10+ years
Location: Gurugram
Notice period: only immediate joiners

Preferred candidate profile
We are seeking an experienced Lead Data Engineer with strong expertise in Python, AWS cloud services, ETL pipelines, and system integrations. The ideal candidate will lead the design, development, and optimization of scalable data solutions and ensure seamless API and data integrations across systems. You will collaborate with cross-functional teams to implement robust DataOps and CI/CD pipelines.

Key Responsibilities:
Responsible for implementation of scalable, secure, and high-performance data pipelines.
Design and develop ETL processes using AWS services (Lambda, S3, Glue, Step Functions, etc.); a brief orchestration sketch follows this listing.
Own and enhance API design and integrations for internal and external data systems.
Work closely with data scientists, analysts, and software engineers to understand data needs and deliver solutions.
Drive DataOps practices for automation, monitoring, logging, testing, and continuous deployment.
Develop CI/CD pipelines for automated deployment of data solutions.
Conduct code reviews and mentor junior engineers in best practices for data engineering and cloud development.
Ensure compliance with data governance, security, and privacy policies.

Required Skills & Experience:
10+ years of experience in data engineering, software development, or related fields.
Strong programming skills in Python for building robust data applications.
Expert knowledge of AWS services, particularly Lambda, S3, Glue, CloudWatch, and Step Functions.
Proven experience designing and managing ETL pipelines for large-scale data processing.
Experience with API design, RESTful services, and API integration workflows.
Deep understanding of DataOps practices and principles.
Hands-on experience implementing CI/CD pipelines (e.g., using CodePipeline, Jenkins, GitHub Actions).
Familiarity with containerization tools like Docker and orchestration tools like ECS/EKS (optional but preferred).
Strong understanding of data modeling, data warehousing concepts, and performance optimization.
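For the Glue/Step Functions pipeline work described above, orchestration often comes down to thin boto3 calls. A hedged sketch with placeholder resource names (the job name and state machine ARN are not from this posting):

import json
import boto3

glue = boto3.client("glue")
sfn = boto3.client("stepfunctions")

# Kick off a (hypothetical) Glue ETL job for one partition...
run = glue.start_job_run(
    JobName="example-orders-etl",
    Arguments={"--partition_date": "2024-01-01"},
)
print("Glue run id:", run["JobRunId"])

# ...or start a Step Functions state machine that wraps several such jobs
# with retries, branching, and failure notifications.
execution = sfn.start_execution(
    stateMachineArn="arn:aws:states:ap-south-1:111122223333:stateMachine:example-etl",
    input=json.dumps({"partition_date": "2024-01-01"}),
)
print("Execution ARN:", execution["executionArn"])

In a DataOps setup, calls like these are usually wrapped in a scheduled Lambda or a CI/CD deployment step rather than run by hand, so every execution is logged and repeatable.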
Posted 1 week ago
5.0 - 7.0 years
22 - 25 Lacs
Pune, Chennai, Bengaluru
Work from Office
We're Hiring: Backend Engineer (strong Python expertise)
Are you a backend expert who loves building scalable and reliable systems? We are looking for a Backend Engineer to join our team on a priority basis!

Core Skills:
Python – strong expertise
AWS Services – Step Functions, Lambda, OpenSearch, S3, Databricks
Terraform – infrastructure as code (preferred)
Git – proficiency in version control
Independent working style with excellent communication skills

Location: Multiple locations across India (Bengaluru, Coimbatore, Chennai, Gurugram/Noida, Hyderabad, Kochi, Kolkata, Mumbai, Pune, Trivandrum)
Experience Level: 5+ years
Employment Type: Full Time

If you're excited about working on challenging projects and want to grow in a dynamic environment, we'd love to hear from you! Interested candidates can DM me or share resumes at arvind.valmiki@orcanexgen.com. For a quicker response, please share your resume and details via WhatsApp at +91 9966990656.
Posted 1 week ago
6.0 - 10.0 years
15 - 20 Lacs
Bengaluru
Work from Office
Role & responsibilities
Design, develop, and deploy scalable applications using Java and AWS services.
Build and maintain serverless architectures using AWS Lambda.
Manage and optimize storage solutions with AWS S3.
Develop and integrate RESTful APIs and Java-based APIs.
Collaborate with cross-functional teams to deliver robust, secure, and high-performance solutions.
Ensure adherence to best practices in coding, testing, and deployment.

Preferred candidate profile
Strong programming skills in Java.
Hands-on experience with AWS services including Lambda and S3.
Proficiency in RESTful API development.
Good understanding of cloud architecture and best practices.
Strong problem-solving and communication skills.
6+ years of full-time experience.
PF (Provident Fund) account.
Face-to-face interview.
Posted 3 weeks ago
5.0 - 10.0 years
11 - 16 Lacs
Noida, Ghaziabad, Greater Noida
Work from Office
Role & responsibilities
5+ years of experience in cloud operations/support, with at least 3+ years on AWS.
• Strong verbal and written communication skills for technical documentation, incident communication, and customer interactions.
• Expertise in AWS services like EC2, RDS, S3, Lambda, CloudWatch, IAM, Auto Scaling, ALB/NLB, Route 53, CloudTrail, Config.
• Strong troubleshooting skills across Linux/Windows OS, networking (VPC, subnets, NACLs, security groups), and storage.
• Experience with Infrastructure as Code (IaC) using CloudFormation and/or Terraform.
• Working knowledge of ITIL processes: incident, change, and problem management.
• Hands-on experience with monitoring and logging tools (e.g., CloudWatch, Prometheus, ELK, Datadog).
• Familiarity with backup solutions, patch management, and cloud cost governance.

Desirables (adds value but not mandatory):
• Exposure to DevOps toolchains (CI/CD, GitOps, Jenkins, CodePipeline).
• Experience in scripting (Python, Bash, or PowerShell); a small example script follows this listing.
• Exposure to container management (ECS/EKS) and serverless architecture.
• Knowledge of cloud security and compliance frameworks (CIS, HIPAA, ISO 27001, etc.).
• Experience in managing multi-account landing zones and AWS Control Tower.

Project Essentials (critical for delivering AWS projects successfully):
• Should have led or been a core part of AWS Managed Services delivery for enterprise customers.
• Must have experience in transitioning AWS environments from build/deployment to steady-state support.
• Should be capable of writing operational runbooks, SOPs, and automation scripts.
• Experience working in a 24x7 support environment with on-call rotation.

Qualification & Certification
1. Education:
• Bachelor's or Master's degree in Computer Science, IT, or a related field.
2. Certifications:
• AWS Certified Solutions Architect / DevOps Engineer / SysOps Administrator.

Key Result Areas
• Operational Excellence & SLA Adherence: ensure resolution of incidents, problems, and changes within defined SLAs for all assigned AWS environments.
• Team & Knowledge Leadership: mentor the L1/L2 team, review runbooks, and ensure upskilling and operational readiness through SOPs and training sessions.
• Customer Satisfaction & Service Improvement: maintain high CSAT through proactive communication, regular reporting, and continuous improvement in cloud operations.

Preferred candidate profile
Must have AWS cloud experience.
Good team-handling experience.
Good in cloud operations/support.
If interested, please share your updated CV at karishma.tripathi@velocis.co.in
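The automation-script and cost-governance items above often translate into small boto3 utilities. As a hedged example only (the region and report format are assumptions), a quick check for unattached EBS volumes could look like:

import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# Report EBS volumes in the 'available' state, i.e. storage that is billed
# but attached to nothing -- a common cloud cost-governance check.
paginator = ec2.get_paginator("describe_volumes")
unattached = []
for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
    unattached.extend(page["Volumes"])

for vol in unattached:
    print(f"{vol['VolumeId']}  {vol['Size']} GiB  created {vol['CreateTime']:%Y-%m-%d}")
print(f"Total unattached volumes: {len(unattached)}")

A script like this is typically scheduled (cron, Lambda, or Systems Manager) and its output fed into the regular operations report rather than run interactively.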
Posted 1 month ago
10.0 - 13.0 years
40 - 65 Lacs
Bengaluru
Hybrid
Job Role: Tech Engineering Manager
Location: Embassy Tech Village
Employment Type: Full-time
Mode: Hybrid (3 days work from office mandatory)
Experience Level: 10-13 Years

Preferred candidate profile
Prior experience building and leading development for internal platforms, logistics tools, or operational systems.
Strong engineering background in Java, .NET, or both, with hands-on design and architecture experience.
Proven leadership in managing and mentoring engineers across varying levels of seniority.
Deep knowledge in designing and maintaining high-availability, distributed systems.
Hands-on experience with cloud platforms such as AWS and/or OCI, including services like EC2, S3, RDS, Lambda, and VPC.
Familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation), CI/CD pipelines, and observability tools (e.g., Grafana, DataDog, Prometheus).
Strong understanding of containerization and orchestration using Docker and Kubernetes.
Solid knowledge of cloud-native design principles and DevOps best practices.
Experience developing service specifications, data models, and managing cloud migration projects.
Excellent communication and collaboration skills with the ability to align technical direction with business goals.
Passion for engineering excellence, innovation, and solving real-world problems through technology.
Posted 1 month ago
10.0 - 20.0 years
20 - 35 Lacs
Noida, Gurugram, Delhi / NCR
Hybrid
Role & responsibilities
Skills: Lead Data Engineer, Python, AWS Glue, API, FastAPI, REST API, ETL, SQL
Location: Gurugram
Experience: 10+ years
Notice period: only immediate joiners

Job Description
Job Summary:
We are seeking an experienced Lead Data Engineer with strong expertise in Python, AWS cloud services, ETL pipelines, and system integrations. The ideal candidate will lead the design, development, and optimization of scalable data solutions and ensure seamless API and data integrations across systems. You will collaborate with cross-functional teams to implement robust DataOps and CI/CD pipelines.

Key Responsibilities:
Responsible for implementation of scalable, secure, and high-performance data pipelines.
Design and develop ETL processes using AWS services (Lambda, S3, Glue, Step Functions, etc.).
Own and enhance API design and integrations for internal and external data systems.
Work closely with data scientists, analysts, and software engineers to understand data needs and deliver solutions.
Drive DataOps practices for automation, monitoring, logging, testing, and continuous deployment.
Develop CI/CD pipelines for automated deployment of data solutions.
Conduct code reviews and mentor junior engineers in best practices for data engineering and cloud development.
Ensure compliance with data governance, security, and privacy policies.

Required Skills & Experience:
10+ years of experience in data engineering, software development, or related fields.
Strong programming skills in Python for building robust data applications.
Expert knowledge of AWS services, particularly Lambda, S3, Glue, CloudWatch, and Step Functions.
Proven experience designing and managing ETL pipelines for large-scale data processing.
Experience with API design, RESTful services, and API integration workflows.
Deep understanding of DataOps practices and principles.
Hands-on experience implementing CI/CD pipelines (e.g., using CodePipeline, Jenkins, GitHub Actions).
Familiarity with containerization tools like Docker and orchestration tools like ECS/EKS (optional but preferred).
Strong understanding of data modeling, data warehousing concepts, and performance optimization.
Posted 2 months ago
12.0 - 17.0 years
20 - 35 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & responsibilities
Job Description:
We are seeking a highly skilled, hands-on AWS DevOps Architect with deep expertise in composable architecture and extensive experience in migrating on-prem monolithic applications to AWS-based composable solutions. The ideal candidate will be responsible for designing, implementing, and maintaining highly scalable, resilient, and secure cloud-native architectures while ensuring seamless integration with on-premises infrastructure.

Key Responsibilities:
Implement and maintain Terraform and Terragrunt with a monorepo on GitHub.
Establish and maintain CI/CD pipelines using tools like GitHub Actions and NX; integrate the pipeline with unit testing, code quality checks, and automation test suites.
Manage AWS Organizations, IAM roles & policies, and VPC configurations, and ensure security best practices are followed.
Automate infrastructure provisioning and configuration management using AWS CloudFormation and Terraform.
Support the infra-architecture for composable commerce solutions leveraging AWS services including Lambda, DynamoDB, GraphQL, Redis, API Gateway, CDN, and multiple AWS regions.
Work closely with development teams using React/Node.js to ensure seamless API integrations and operational efficiency.
Design and execute migration strategies for transitioning on-prem monolithic applications to a modular, cloud-native architecture.
Implement robust monitoring and observability solutions utilizing AWS-native tools and best practices.
Optimize system performance, scalability, and reliability across distributed environments.
Ensure high availability and fault tolerance across multiple AWS regions.
Support hybrid cloud environments, enabling seamless on-prem and cloud integration.

Required Skills & Qualifications:
10-12 years of experience in cloud architecture & DevOps with a focus on AWS.
Strong expertise in composable architecture, serverless computing, and API-driven solutions.
Proven experience in migrating on-prem monolithic applications to cloud-native composable architecture.
Proficiency in GitHub, GitHub Actions, NX, and SonarQube for CI/CD automation.
Hands-on experience in monitoring and observability solutions within AWS.
Deep knowledge of AWS networking, security, IAM, VPCs, and multi-region deployments.
Strong experience with AWS CloudFormation, Terraform, Terragrunt, and infrastructure-as-code best practices.
Experience with React/Node.js and microservices-based development.
Knowledge of Redis, GraphQL, API Gateway, SQS, SNS, and event-driven architectures.
Understanding of CDN strategies for content delivery and performance optimization.
Experience integrating AWS with on-prem systems and hybrid cloud environments.

Good to Have:
Experience in Jenkins for CI/CD pipeline automation.
Understanding of online retail application development and e-commerce platforms.
Knowledge of Windows OS and Linux environments.
Familiarity with composable commerce solutions such as BigCommerce, CommerceTools, or similar.

Why Join Us?
Work on cutting-edge cloud-native architectures with industry-leading technologies.
Be a key part of a large-scale digital transformation journey.
Collaborate with highly skilled engineers in an agile and innovative environment.
Competitive salary, benefits, and career growth opportunities.
Posted 2 months ago
2.0 - 5.0 years
0 - 1 Lacs
Pune
Remote
Dynatrace (on-prem & SaaS)
Python coding
SLI/SLO/SLA – setup, tracking, reporting
OpenTelemetry & instrumentation: knowledge of logging, tracing, and metrics collection
AWS services: CloudWatch, X-Ray, Lambda
Red Hat OpenShift on AWS
Grafana
Posted 2 months ago
5.0 - 10.0 years
8 - 14 Lacs
Bengaluru
Work from Office
Information Architecture, SET ART, Django framework, Python, Docker, system architecture, IAM concepts & protocols (OAuth2, SAML), Agile/Scrum, AWS cloud services (IAM, Lambda), FastAPI, CI/CD, REST API design, monitoring & logging (CloudWatch, ELK). Contact: 9140679821
Posted 2 months ago
3.0 - 5.0 years
7 - 13 Lacs
New Delhi, Gurugram, Delhi / NCR
Hybrid
Title: Infrastructure Engineer
Location: Gurugram, Haryana

Company: Morningstar is a leading provider of independent investment research in North America, Europe, Australia, and Asia. We offer a wide variety of products and solutions that serve market participants of all kinds, including individual and institutional investors in public and private capital markets, financial advisors, asset managers, retirement plan providers and sponsors, and issuers of securities. Morningstar India has been a Great Place to Work-certified company for the past eight consecutive years.

Role: As an Infrastructure Engineer, you will be at the forefront of deploying and maintaining the core infrastructure that powers the organization's technology landscape. This role requires a strategic thinker with hands-on expertise in infrastructure technologies, a strong grasp of project execution, and the ability to communicate cross-functional efforts. You'll ensure that systems are resilient, secure, scalable, and high-performing, while driving innovation and efficiency.

Shift: General

Responsibilities:
• Infrastructure Design & Architecture
o Design and maintain robust, scalable, and secure infrastructure solutions that align with business goals.
o Partner with cross-functional teams to gather infrastructure requirements and recommend optimal solutions.
• System Implementation & Operations
o Deploy, configure, and manage infrastructure components including compute, storage, networking, and virtualization platforms.
o Monitor infrastructure health and performance, troubleshoot issues, and optimize systems for peak efficiency.
• Team Collaboration
o Work closely with DevOps, Security, and Development teams to ensure seamless delivery of infrastructure services.
• Security & Compliance
o Implement infrastructure security best practices, patch management, and hardening techniques.
o Support compliance initiatives and participate in internal and external security audits.
• Project Management
o Maintain documentation and drive continuous communication among stakeholders.
• Automation & Innovation
o Drive automation of infrastructure provisioning and management using tools like Terraform, Ansible, or similar.
o Stay current with emerging infrastructure trends and recommend improvements or adoptions that drive efficiency.
• Disaster Recovery & Business Continuity
o Design, implement, and regularly test disaster recovery and backup strategies to ensure system resiliency (a small backup-automation sketch follows this listing).
o Maintain and improve business continuity plans to minimize downtime and data loss.

Qualifications:
• 3-5 years of relevant professional experience in infrastructure and cloud services.
• Strong hands-on experience with AWS services including EC2, S3, IAM, Route 53, Lambda, Kinesis, ElastiCache, DynamoDB, Aurora, and Elasticsearch.
• Proficient in infrastructure automation using Terraform and AWS CloudFormation.
• Hands-on experience with configuration management tools such as Ansible, Chef, or Puppet.
• Strong working knowledge of CI/CD tools, particularly Jenkins.
• Proficient with Git and other version control systems.
• Hands-on experience with Docker and container orchestration using Amazon EKS or Kubernetes.
• Proficient in scripting with Bash and PowerShell.
• Solid experience with both Linux and Windows Server administration.
• Experience setting up monitoring, logging, and alerting solutions using tools like CloudWatch, Nagios, etc.
• Working knowledge of Python and the AWS Boto3 SDK.

Morningstar is an equal opportunity employer.
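As a hedged sketch of the backup and disaster-recovery automation mentioned above (the Backup=daily tag convention, region, and account are assumptions, not Morningstar standards), scheduled EBS snapshots could be scripted with boto3:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Snapshot every volume attached to instances tagged Backup=daily
# (hypothetical tagging convention); typically run on a schedule, for
# example from a Lambda function triggered by an EventBridge rule.
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:Backup", "Values": ["daily"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        for mapping in instance.get("BlockDeviceMappings", []):
            if "Ebs" not in mapping:
                continue  # skip instance-store (ephemeral) devices
            volume_id = mapping["Ebs"]["VolumeId"]
            snap = ec2.create_snapshot(
                VolumeId=volume_id,
                Description=f"daily backup of {instance['InstanceId']}",
            )
            print("Created snapshot", snap["SnapshotId"])

Snapshot retention and cross-region copies would normally be handled by AWS Backup or a lifecycle policy rather than extending this script.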
Posted 2 months ago
6.0 - 11.0 years
8 - 15 Lacs
Pune
Hybrid
- Experience in developing applications using Python, Glue (ETL), Lambda, and Step Functions in AWS, along with EKS, S3, EMR, RDS data stores, CloudFront, and API Gateway
- Experience in AWS services such as Amazon Elastic Compute Cloud (EC2), Glue, Amazon S3, EKS, and Lambda

Required Candidate profile
- 7+ years of experience in software development and technical leadership, preferably with strong financial knowledge in building complex trading applications.
- Research and evaluate new technologies
Posted 2 months ago
5.0 - 10.0 years
22 - 37 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Experience: 5-8 Years (Lead, 23 LPA), 8-10 Years (Senior Lead, 35 LPA), 10+ Years (Architect, 42 LPA) - maximum
Location: Bangalore as 1st preference; Hyderabad, Chennai, Pune, and Gurgaon also possible
Notice: Immediate to max 15-day joiners
Mode of Work: Hybrid

Job Description:
Athena, Step Functions, Spark (PySpark), ETL fundamentals, SQL (basic + advanced), Glue, Python, Lambda, data warehousing, EBS/EFS, AWS EC2, Lake Formation, Aurora, S3, modern data platform fundamentals, PL/SQL, CloudFront

We are looking for an experienced AWS Data Engineer to design, build, and manage robust, scalable, and high-performance data pipelines and data platforms on AWS. The ideal candidate will have a strong foundation in ETL fundamentals, data modeling, and modern data architecture, with hands-on expertise across a broad spectrum of AWS services including Athena, Glue, Step Functions, Lambda, S3, and Lake Formation.

Key Responsibilities:
Design and implement scalable ETL/ELT pipelines using AWS Glue, Spark (PySpark), and Step Functions.
Work with structured and semi-structured data using Athena, S3, and Lake Formation to enable efficient querying and access control (see the brief Athena sketch after this listing).
Develop and deploy serverless data processing solutions using AWS Lambda and integrate them into pipeline orchestration.
Perform advanced SQL and PL/SQL development for data transformation, analysis, and performance tuning.
Build data lakes and data warehouses using S3, Aurora, and Athena.
Implement data governance, security, and access control strategies using AWS tools including Lake Formation, CloudFront, EBS/EFS, and IAM.
Develop and maintain metadata, lineage, and data cataloging capabilities.
Participate in data modeling exercises for both OLTP and OLAP environments.
Work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights.
Monitor, debug, and optimize data pipelines for reliability and performance.

Required Skills & Experience:
Strong experience with AWS data services: Glue, Athena, Step Functions, Lambda, Lake Formation, S3, EC2, Aurora, EBS/EFS, CloudFront.
Proficient in PySpark, Python, SQL (basic and advanced), and PL/SQL.
Solid understanding of ETL/ELT processes and data warehousing concepts.
Familiarity with modern data platform fundamentals and distributed data processing.
Experience in data modeling (conceptual, logical, physical) for analytical and operational use cases.
Experience with orchestration and workflow management tools within AWS.
Strong debugging and performance tuning skills across the data stack.
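To ground the Athena querying mentioned above, here is a hedged boto3 sketch; the database name, table, query, and S3 output location are placeholders rather than details from this posting:

import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

# Run a query against a (hypothetical) Glue-catalogued table and poll until done.
qid = athena.start_query_execution(
    QueryString="SELECT event_type, count(*) AS cnt FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "example_analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows:  # the first row returned is the column header
        print([col.get("VarCharValue") for col in row["Data"]])

Lake Formation permissions and the Glue Data Catalog govern what such a query can see, which is why access control is listed alongside querying in the responsibilities above.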
Posted 2 months ago
6.0 - 11.0 years
8 - 18 Lacs
Hyderabad
Work from Office
Mandatory Skills: .NET Core, AWS Cloud, Angular 10 or above

Key Responsibilities:
Full Stack Development:
Design, develop, and maintain web applications using .NET technologies, including C#, .NET Core, and ASP.NET Web API.
Build and maintain front-end applications using Angular 10+ versions.
Implement responsive and user-friendly UI features, ensuring a seamless user experience across devices.

Cloud Development and Management:
Utilize AWS services (S3, Lambda, EC2, CloudWatch) for hosting, deployment, and monitoring of applications.
Work with AWS services for automation, infrastructure management, and scaling solutions.

Backend Development & API Design:
Develop robust backend APIs using .NET 4.6.1 / .NET Core 3, ensuring high performance and security.
Integrate third-party APIs and services into applications, ensuring scalability and reliability.

Code Quality & CI/CD:
Implement best practices for code quality and standards; use tools like SonarQube to keep the code free from errors and maintain high quality.
Work with Jenkins for continuous integration and continuous deployment (CI/CD), ensuring smooth deployments and minimal downtime.

Collaboration and Agile Practices:
Collaborate effectively with cross-functional teams, including designers, product managers, and other developers.
Use Agile methodologies for efficient development, and actively participate in sprint planning, stand-ups, and retrospectives.
Track and manage tasks using Jira, ensuring all tasks are completed on time and according to project requirements.

Version Control & Docker:
Manage source code and collaborate with the team using Git for version control.
Use Docker for containerization and deployment, ensuring consistent environments across development, staging, and production.

Required Skills & Qualifications:
Experience in Full Stack Development: Proven experience as a full-stack developer using .NET technologies (e.g., .NET 4.6.1, .NET Core 3, ASP.NET Web API 2).
Frontend Technologies: Strong hands-on experience with Angular 10+ and other front-end technologies.
Cloud Technologies: Hands-on experience working with AWS services such as S3, Lambda, CloudWatch, and EC2.
CI/CD & Code Quality: Experience with tools like Jenkins, SonarQube, and other DevOps practices.
Version Control & Collaboration Tools: Experience using Git for version control and Jira for task tracking.
Containerization: Knowledge of Docker for creating and managing containerized applications.
Strong Problem-Solving Skills: Ability to troubleshoot, debug, and optimize both front-end and back-end issues.
Team Player: Strong communication skills and the ability to collaborate effectively in a team-oriented environment.

Preferred Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
Experience with other cloud platforms or services is a plus.
Familiarity with Agile methodologies and Scrum practices.
Familiarity with additional tools such as Kubernetes, Terraform, or other infrastructure automation tools.
Posted 2 months ago
7.0 - 12.0 years
10 - 20 Lacs
Hyderabad
Work from Office
JD: .NET AWS Lead
Overall Experience: 8+ years
Relevant Experience: 4+ years
Work Location: Hyderabad
Work Timing: General shift
Work Mode: WFO
Position Type: Permanent (full-time)
Notice Period: 0-30 days

Mandatory Skills: .NET Core, AWS Cloud, Angular 10 or above

Key Responsibilities:
Full Stack Development:
Design, develop, and maintain web applications using .NET technologies, including C#, .NET Core, and ASP.NET Web API.
Build and maintain front-end applications using Angular 10+ versions.
Implement responsive and user-friendly UI features, ensuring a seamless user experience across devices.

Cloud Development and Management:
Utilize AWS services (S3, Lambda, EC2, CloudWatch) for hosting, deployment, and monitoring of applications.
Work with AWS services for automation, infrastructure management, and scaling solutions.

Backend Development & API Design:
Develop robust backend APIs using .NET 4.6.1 / .NET Core 3, ensuring high performance and security.
Integrate third-party APIs and services into applications, ensuring scalability and reliability.

Code Quality & CI/CD:
Implement best practices for code quality and standards; use tools like SonarQube to keep the code free from errors and maintain high quality.
Work with Jenkins for continuous integration and continuous deployment (CI/CD), ensuring smooth deployments and minimal downtime.

Collaboration and Agile Practices:
Collaborate effectively with cross-functional teams, including designers, product managers, and other developers.
Use Agile methodologies for efficient development, and actively participate in sprint planning, stand-ups, and retrospectives.
Track and manage tasks using Jira, ensuring all tasks are completed on time and according to project requirements.

Version Control & Docker:
Manage source code and collaborate with the team using Git for version control.
Use Docker for containerization and deployment, ensuring consistent environments across development, staging, and production.

Required Skills & Qualifications:
Experience in Full Stack Development: Proven experience as a full-stack developer using .NET technologies (e.g., .NET 4.6.1, .NET Core 3, ASP.NET Web API 2).
Frontend Technologies: Strong hands-on experience with Angular 10+ and other front-end technologies.
Cloud Technologies: Hands-on experience working with AWS services such as S3, Lambda, CloudWatch, and EC2.
CI/CD & Code Quality: Experience with tools like Jenkins, SonarQube, and other DevOps practices.
Version Control & Collaboration Tools: Experience using Git for version control and Jira for task tracking.
Containerization: Knowledge of Docker for creating and managing containerized applications.
Strong Problem-Solving Skills: Ability to troubleshoot, debug, and optimize both front-end and back-end issues.
Team Player: Strong communication skills and the ability to collaborate effectively in a team-oriented environment.

Preferred Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
Experience with other cloud platforms or services is a plus.
Familiarity with Agile methodologies and Scrum practices.
Familiarity with additional tools such as Kubernetes, Terraform, or other infrastructure automation tools.
Posted 2 months ago
8.0 - 13.0 years
12 - 22 Lacs
Pune
Hybrid
- Experience in developing applications using Python, Glue (ETL), Lambda, and Step Functions in AWS, along with EKS, S3, EMR, RDS data stores, CloudFront, and API Gateway
- Experience in AWS services such as Amazon Elastic Compute Cloud (EC2), Glue, Amazon S3, EKS, and Lambda

Required Candidate profile
- 10+ years of experience in software development and technical leadership, preferably with strong financial knowledge in building complex trading applications.
- 5+ years of people management experience.
Posted 2 months ago
6.0 - 10.0 years
9 - 18 Lacs
Hyderabad
Hybrid
Primary Skills (mandatory top 3 skills):
AWS working experience
AWS Glue or equivalent product experience
Lambda functions
Python programming
Kubernetes knowledge

Roles and Responsibilities:
- Develop code
- Deployment
- Testing
- Bug fixing

Number of interview rounds: 2 (face-to-face is mandatory)
Posted 2 months ago
7.0 - 12.0 years
15 - 30 Lacs
Pune, Ahmedabad
Work from Office
We are seeking a seasoned Lead Platform Engineer with a strong background in platform development and a proven track record of leading technology design and teams. The ideal candidate will have at least 8 years of overall experience, with a minimum of 5 years in relevant roles. This position entails owning module design and spearheading the implementation process alongside a team of talented platform engineers.

Job Title: Lead Platform Engineer
Job Location: Ahmedabad/Pune (Work from Office)
Required Experience: 7+ Years
Educational Qualification: UG: BS/MS in Computer Science, or other engineering/technical degree

Key Responsibilities:
Lead the design and architecture of robust, scalable platform modules, ensuring alignment with business objectives and technical standards.
Drive the implementation of platform solutions, collaborating closely with platform engineers and cross-functional teams to achieve project milestones.
Mentor and guide a team of platform engineers, fostering an environment of growth and continuous improvement.
Stay abreast of emerging technologies and industry trends, incorporating them into the platform to enhance functionality and user experience.
Ensure the reliability and security of the platform through comprehensive testing and adherence to best practices.
Collaborate with senior leadership to set technical strategy and goals for the platform engineering team.

Requirements:
Minimum of 8 years of experience in software or platform engineering, with at least 5 years in roles directly relevant to platform development and team leadership.
Expertise in Python programming, with a solid foundation in writing clean, efficient, and scalable code.
Proven experience in serverless application development, designing and implementing microservices, and working within event-driven architectures.
Demonstrated experience in building and shipping high-quality SaaS platforms/applications on AWS, showcasing a portfolio of successful deployments.
Comprehensive understanding of cloud computing concepts, AWS architectural best practices, and familiarity with a range of AWS services, including but not limited to Lambda, RDS, DynamoDB, and API Gateway.
Exceptional problem-solving skills, with a proven ability to optimize complex systems for efficiency and scalability.
Excellent communication skills, with a track record of effective collaboration with team members and successful engagement with stakeholders across various levels.
Previous experience leading technology design and engineering teams, with a focus on mentoring, guiding, and driving the team towards achieving project milestones and technical excellence.

Good to Have:
AWS Certified Solutions Architect, AWS Certified Developer, or other relevant cloud development certifications.
Experience with the AWS Boto3 SDK for Python.
Exposure to other cloud platforms such as Azure or GCP.
Knowledge of containerization and orchestration technologies, such as Docker and Kubernetes.
Posted 3 months ago