
867 Lambda Expressions Jobs - Page 30

JobPe aggregates listings for easy access, but applications must be submitted directly on the original job portal.

7.0 - 12.0 years

10 - 14 Lacs

Gurugram

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Amazon Web Services (AWS)
Good-to-have skills: NA
Minimum 7.5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process, collaborating with team members, and ensuring project success.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for the immediate team and across multiple teams
- Lead the application development process effectively
- Ensure timely delivery of projects
- Mentor and guide team members for skill enhancement

Professional & Technical Skills:
- Must-have: Proficiency in Amazon Web Services (AWS)
- Strong understanding of cloud computing concepts
- Experience in designing scalable and reliable applications on AWS
- Knowledge of infrastructure-as-code and automation tools
- Hands-on experience with AWS services such as EC2, S3, Lambda, and RDS

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Amazon Web Services (AWS)
- This position is based at our Gurugram office
- 15 years of full-time education is required

Posted Date not available

Apply

3.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: AWS Lambda Administration
Good-to-have skills: AWS S3 (Simple Storage Service), Python (Programming Language), AWS Application Integration, EventBridge, API Gateway
Minimum 3 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging to ensure the applications function as intended, while continuously seeking ways to enhance application efficiency and user experience.

Roles & Responsibilities:
- Expected to perform independently and become an SME
- Actively participate in and contribute to team discussions
- Contribute to providing solutions to work-related problems
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications
- Conduct thorough testing and debugging of applications to ensure high-quality deliverables

Professional & Technical Skills:
- Must-have: Proficiency in AWS Lambda Administration
- Good to have: Experience with AWS S3 (Simple Storage Service), Python (Programming Language), AWS Application Integration
- Strong understanding of serverless architecture and its implementation
- Experience in developing and deploying applications using AWS services
- Familiarity with application monitoring and performance tuning

Additional Information:
- The candidate should have a minimum of 3 years of experience in AWS Lambda Administration
- This position is based at our Bengaluru office
- 15 years of full-time education is required
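The listing above centers on AWS Lambda with API Gateway integration. As a minimal illustration of what Lambda development looks like, here is a Python handler for an API Gateway proxy event, invoked locally with a fabricated event; the function name, the `name` field, and the sample payload are all illustrative assumptions, not part of the listing.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy-style event.

    Reads an optional 'name' field from the JSON request body and returns
    an API Gateway-shaped response dict. Field names are illustrative.
    """
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Lambda supplies a runtime context object; it is unused here, so None
# suffices when exercising the handler locally.
sample_event = {"body": json.dumps({"name": "Bengaluru"})}
response = handler(sample_event, None)
print(response["statusCode"])  # 200
```

In a real deployment the same function would be packaged and registered as the Lambda entry point, with API Gateway constructing the event; testing it as a plain function like this is a common local-development pattern.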


5.0 - 9.0 years

9 - 13 Lacs

Gurugram, Bengaluru

Work from Office

About the Role: Grade Level (for internal use): 11

The Role: Lead Cloud Engineer

The Team: We are looking for a dynamic AWS Cloud Support Engineer to join our team, working across multiple AWS accounts to ensure seamless cloud operations. This is a varied role that requires deep technical expertise, strategic planning, and strong stakeholder communication. Collaboration is at the core of our team, so if you thrive in a fast-paced, problem-solving environment, we'd love to hear from you.

The Impact: Contribute significantly to the growth of the firm by developing innovative functionality in existing and new products, and by supporting and maintaining high-revenue products.

What's in it for you:
- A collaborative team culture that values innovation and problem-solving.
- Opportunity to work on diverse projects spanning multiple AWS accounts.
- A chance to shape cloud strategy and architecture in a growing organizational division.
- Active support for taking learning opportunities.
- Exciting open-door collaboration within the EDO Agentic AI experience.

Key Responsibilities:
- Architecture Planning: Design and refine AWS architectures to meet business needs, ensuring security, scalability, and cost-effectiveness.
- Cost Management: Monitor infrastructure costs and propose changes to stakeholders to reduce cloud spend and waste.
- Multi-Account Management: Oversee cloud environments across numerous AWS accounts, maintaining best practices for governance and security.
- Troubleshooting & Incident Response: Diagnose and resolve complex technical issues related to AWS services, infrastructure, and networking.
- Stakeholder Collaboration: Communicate effectively with teams across the organization, providing insights, technical recommendations, and status updates.
- Automation & Optimization: Develop scripts and tools to automate deployments, monitoring, and management processes.
- Security & Compliance: Ensure adherence to security policies and regulatory requirements within AWS environments.
- Continuous Improvement: Stay updated with AWS advancements and recommend improvements for existing cloud strategies.

What We're Looking For:
- Proven experience in AWS cloud infrastructure and services.
- Strong understanding of networking, security, and cloud architecture best practices.
- Proficiency in Terraform, CloudFormation, or other infrastructure-as-code tools is a plus.
- Hands-on experience with EC2, S3, RDS, Lambda, VPC, Bedrock and other AWS services preferred.
- Ability to troubleshoot complex system and network issues across cloud environments.
- Excellent communication skills and the ability to work collaboratively in a team-oriented environment.
- AWS certifications (Solutions Architect, SysOps, or Developer) are preferred but not mandatory.

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People, Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global.
- Health & Wellness: Health care coverage designed for the mind and body.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.2 - Middle Professional Tier II (EEO Job Group)


7.0 - 12.0 years

2 - 5 Lacs

Mumbai

Work from Office

Inviting applications for the role of Principal Consultant - AWS Developer. We are seeking an experienced developer with expertise in AWS-based big data solutions, particularly leveraging Apache Spark on AWS EMR, along with strong backend development skills in Java and Spring. The ideal candidate will also possess a solid background in data warehousing, ETL pipelines, and large-scale data processing systems.

Responsibilities:
- Design and implement scalable data processing solutions using Apache Spark on AWS EMR.
- Develop microservices and backend components using Java and the Spring framework.
- Build, optimize, and maintain ETL pipelines for structured and unstructured data.
- Integrate data pipelines with AWS services such as S3, Lambda, Glue, Redshift, and Athena.
- Collaborate with data architects, analysts, and DevOps teams to support data warehousing initiatives.
- Write efficient, reusable, and reliable code following best practices.
- Ensure data quality, governance, and lineage across the architecture.
- Troubleshoot and optimize Spark jobs and cloud-based processing workflows.
- Participate in code reviews, testing, and deployments in Agile environments.

Qualifications we seek in you!
Minimum Qualifications: Bachelor's degree.
Preferred Qualifications/Skills:
- Strong experience with Apache Spark and AWS EMR in production environments.
- Solid understanding of the AWS ecosystem, including services like S3, Lambda, Glue, Redshift, and CloudWatch.
- Proven experience in designing and managing large-scale data warehousing systems.
- Expertise in building and maintaining ETL pipelines and data transformation workflows.
- Strong SQL skills and familiarity with performance tuning for analytical queries.
- Experience working in Agile development environments using tools such as Git, JIRA, and CI/CD pipelines.
- Familiarity with data modeling concepts and tools (e.g., Star Schema, Snowflake Schema).
- Knowledge of data governance tools and metadata management.
- Experience with containerization (Docker, Kubernetes) and serverless architectures.
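The ETL-pipeline work this role describes would run on Spark/EMR at scale, but the extract-transform-load shape itself is simple to sketch. Below is a plain-Python toy version of that shape; the record fields, the data-quality rule, and the in-memory "warehouse" are all invented for illustration and stand in for S3 sources and Redshift/Athena sinks.

```python
# A minimal extract-transform-load sketch in plain Python.

def extract():
    # Stand-in for reading raw rows from S3 or a source table.
    return [
        {"order_id": 1, "amount": "120.50", "region": "south"},
        {"order_id": 2, "amount": "80.00", "region": "north"},
        {"order_id": 3, "amount": "bad", "region": "south"},
    ]

def transform(rows):
    # Cast types, drop malformed rows, normalize values.
    clean = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # data-quality rule: skip unparseable amounts
        clean.append({"order_id": row["order_id"],
                      "amount": amount,
                      "region": row["region"].upper()})
    return clean

def load(rows, sink):
    # Stand-in for writing curated rows to warehouse storage.
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded)  # 2 rows survive the quality check
```

On Spark the same logic would be expressed over DataFrames and partitioned inputs, but interviews for roles like this often probe exactly this decomposition: where parsing, validation, and writes each belong.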


6.0 - 11.0 years

13 - 18 Lacs

Pune

Hybrid

Who are we looking for? We are looking for a Databricks engineer (developer) with strong software development experience of 6 to 10 years on Apache Spark and Scala.

Technical Skills:
- Strong knowledge and hands-on experience in Apache Spark and Scala
- Experience in AWS S3, Redshift, EC2 and Lambda services
- Extensive experience in developing and deploying big data pipelines
- Experience in Azure Data Lake
- Strong hands-on SQL development / Azure SQL, and in-depth understanding of optimization and tuning techniques in SQL with Redshift
- Development in notebooks (such as Jupyter, Databricks, Zeppelin)
- Development experience in Spark
- Experience in scripting languages like Python and any other programming language

Roles and Responsibilities:
- Candidate must have hands-on experience in AWS Databricks
- Good development experience using Python/Scala, Spark SQL and DataFrames
- Hands-on experience with Databricks; Data Lake and SQL knowledge is a must
- Performance tuning, troubleshooting, and debugging Spark

Process Skills: Agile - Scrum
Qualification: Bachelor of Engineering (computer background preferred)


5.0 - 10.0 years

20 - 25 Lacs

Noida

Work from Office

Description: Years of experience: 5 to 8. Location: Pune, Noida, Bangalore.

Requirements:
- Strong experience with MS SQL Server and Snowflake
- Solid knowledge of AWS cloud services (e.g., S3, EC2, Glue, Lambda)
- Proficient in Python and C# for scripting, API integration, and data processing
- Experience working with Apache NiFi for data ingestion and orchestration
- Hands-on with Apache Airflow for scheduling and managing workflows
- Proficient in Apache Spark for big data processing and transformation
- Experience working with Qubole or similar data lake platforms
- Strong problem-solving skills and attention to detail
- Excellent communication and collaboration abilities

Job Responsibilities:
- Design and develop scalable ETL/ELT pipelines using Apache NiFi, Airflow, and Spark
- Implement and manage data workflows across AWS cloud services, including data ingestion, transformation, and storage
- Develop and optimize queries in MS SQL and Snowflake for efficient data retrieval and reporting
- Collaborate with data scientists, analysts, and application developers to ensure reliable and timely data delivery
- Build and maintain data lake solutions on Qubole, ensuring data quality, governance, and security
- Write clean, efficient code in Python and C# for automation, integration, and backend data processing tasks
- Ensure pipeline and job reliability through monitoring, alerting, and error handling
- Participate in performance tuning and optimization of data jobs and warehouse structures
- Drive best practices for data modeling, metadata management, and documentation

What We Offer:
- Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
- Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities!
- Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
- Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft skill trainings.
- Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
- Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks and the GL Club, where you can have coffee or tea with your colleagues over a game, along with discounts for popular stores and restaurants!
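Airflow, named repeatedly in this listing, schedules tasks as a directed acyclic graph: each task runs only after its upstream tasks finish. The toy runner below reproduces that dependency-ordering idea with the standard library's `graphlib` (Python 3.9+); the task names and callables mirror a generic ingest-transform-load-report pipeline and are purely illustrative, not Airflow's API.

```python
# A toy DAG runner: execute callables in dependency-respecting order,
# passing each task the results of everything that ran before it.
from graphlib import TopologicalSorter

def run_pipeline(dag, actions):
    """dag maps task name -> set of upstream task names."""
    order = list(TopologicalSorter(dag).static_order())
    results = {}
    for task in order:
        results[task] = actions[task](results)
    return order, results

dag = {
    "ingest": set(),                      # no upstream dependencies
    "transform": {"ingest"},
    "load_warehouse": {"transform"},
    "report": {"load_warehouse"},
}
actions = {
    "ingest": lambda r: [3, 1, 2],
    "transform": lambda r: sorted(r["ingest"]),
    "load_warehouse": lambda r: len(r["transform"]),
    "report": lambda r: f"loaded {r['load_warehouse']} rows",
}
order, results = run_pipeline(dag, actions)
print(results["report"])  # loaded 3 rows
```

Airflow adds scheduling, retries, and distributed workers on top of this core idea, but the mental model candidates are usually asked about is exactly this: tasks as nodes, dependencies as edges, execution in topological order.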


3.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office

About The Role - Grade Specific: We are seeking a talented Golang & Cloud Infrastructure Engineer with 3+ years of experience in building scalable backend systems and managing cloud infrastructure. The ideal candidate will have strong expertise in writing production-ready Go code, deploying containerized applications, and managing AWS resources using Terraform. This role requires a deep understanding of cloud architecture, CI/CD automation, and performance optimization.

Responsibilities:
- Develop and maintain backend services using Go (Golang) for high-performance applications.
- Design and manage AWS infrastructure using ECS, Fargate, S3, CloudFront, and Terraform.
- Build and deploy containerized applications using Docker.
- Implement and maintain CI/CD pipelines for automated testing and deployment.
- Optimize system performance and troubleshoot production issues.
- Collaborate with cross-functional teams to deliver reliable and scalable solutions.

Primary Skills:
- 3+ years of experience in Go (Golang) development.
- Strong hands-on experience with AWS services: ECS, Fargate, S3, CloudFront.
- Proficiency in Terraform for infrastructure as code.
- Solid understanding of Docker and container orchestration.
- Experience with CI/CD tools and deployment automation.
- Strong debugging and performance tuning capabilities.

Secondary Skills:
- Experience with monitoring and observability tools (e.g., CloudWatch, Datadog).
- Familiarity with serverless AWS services (e.g., Lambda, API Gateway).
- Knowledge of cloud cost optimization strategies.
- Excellent written and verbal communication skills.

Educational Qualification: Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.


15.0 - 20.0 years

15 - 20 Lacs

Mumbai

Work from Office

Project Role: Solution Architect
Project Role Description: Translate client requirements into differentiated, deliverable solutions using in-depth knowledge of a technology, function, or platform. Collaborate with the Sales Pursuit and Delivery Teams to develop a winnable and deliverable solution that underpins the client value proposition and business case.
Must-have skills: Python (Programming Language)
Good-to-have skills: AWS Architecture
Minimum 12 years of experience is required.
Educational Qualification: 15 years of full-time education

Key Responsibilities:
- Define the architectural vision and ensure scalability, maintainability, and performance.
- Design and implement event-driven architectures and microservices using AWS.
- Lead adoption of serverless frameworks and containerized solutions (Lambda, EKS, Docker).
- Establish best practices for data storage and processing with AWS S3 and DynamoDB.
- Architect and automate workflows using AWS EventBridge.
- Guide teams on designing and optimizing secure, high-performance RESTful APIs.
- Recommend database solutions and ensure efficient querying with SQL.
- Align architecture with business goals while ensuring compliance and security.

Required Skills:
- Expertise in Python and large-scale system architecture.
- Deep knowledge of AWS services like EKS, Lambda, S3, DynamoDB, and EventBridge.
- Strong background in event-driven architectures and serverless computing.
- Experience with Docker, Kubernetes, and containerization.
- Advanced REST API design skills with a focus on performance and security.
- Proficiency in SQL and exposure to NoSQL databases.
- Strong leadership and communication skills.
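The event-driven architecture this role emphasizes follows the pattern AWS EventBridge implements: producers publish events to a bus, and rules match event fields to route them to targets. Here is a tiny in-process sketch of that routing idea; the `MiniBus` class and its pattern syntax (a list of allowed values per field) are simplified assumptions for illustration, not the real EventBridge rule language or API.

```python
# A toy event bus: rules pair a field-matching pattern with a handler,
# and put_event delivers an event to every rule whose pattern matches.

class MiniBus:
    def __init__(self):
        self.rules = []  # list of (pattern, handler) pairs

    def put_rule(self, pattern, handler):
        self.rules.append((pattern, handler))

    def put_event(self, event):
        delivered = 0
        for pattern, handler in self.rules:
            # a pattern matches when every field's value is in its allowed list
            if all(event.get(k) in allowed for k, allowed in pattern.items()):
                handler(event)
                delivered += 1
        return delivered

bus = MiniBus()
audit_log = []
bus.put_rule({"source": ["orders"], "detail-type": ["OrderPlaced"]},
             lambda e: audit_log.append(e["detail"]))

n = bus.put_event({"source": "orders", "detail-type": "OrderPlaced",
                   "detail": {"order_id": 42}})
print(n, audit_log)  # 1 [{'order_id': 42}]
```

The architectural payoff, which is what the listing is really asking about, is decoupling: the producer of `OrderPlaced` knows nothing about the consumers, so targets can be added or removed by changing rules alone.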


5.0 - 10.0 years

11 - 15 Lacs

Hyderabad

Work from Office

Stellantis is seeking a passionate, innovative, results-oriented Information Communication Technology (ICT) Manufacturing AWS Cloud Architect to join the team. As a Cloud Architect, the selected candidate will leverage business analysis, data management, and data engineering skills to develop sustainable data tools supporting Stellantis's Manufacturing Portfolio Planning. This role will collaborate closely with data analysts and business intelligence developers within the Product Development IT Data Insights team.

Job responsibilities include, but are not limited to:
- Deep expertise in the design, creation, management, and business use of large datasets across a variety of data platforms
- Assembling large, complex sets of data that meet non-functional and functional business requirements
- Identifying, designing and implementing internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes
- Building the infrastructure required for optimal extraction, transformation and loading of data from various data sources using AWS, cloud and other SQL technologies
- Working with stakeholders to support their data infrastructure needs while assisting with data-related technical issues
- Maintaining high-quality ontology and metadata of data systems
- Establishing a strong relationship with the central BI/data engineering COE to ensure alignment in leveraging corporate-standard technologies, processes, and reusable data models
- Ensuring data security and developing traceable procedures for user access to data systems

Qualifications, Experience and Competency
Education: Bachelor's or Master's degree in Computer Science, or a related IT-focused degree

Experience (Essential):
- Overall 10-15 years of IT experience
- Develop, automate and maintain the build of AWS components and operating systems
- Work with application and architecture teams to conduct proofs of concept (POC) and implement the design in a production environment in AWS
- Migrate and transform existing workloads from on-premise to AWS
- Minimum 5 years of experience in data engineering or data architecture: concepts, approach, data lakes, data extraction, data transformation
- Proficient in ETL optimization; designing, coding, and tuning big data processes using Apache Spark or similar technologies
- Experience operating very large data warehouses or data lakes
- Investigate and develop new microservices and features using the latest technology stacks from AWS
- Self-starter with the desire and ability to quickly learn new technologies
- Strong interpersonal skills with the ability to communicate and build relationships at all levels
- Hands-on experience with AWS cloud technologies like S3, AWS Glue, Glue Catalog, Athena, AWS Lambda, AWS DMS, PySpark, and Snowflake
- Experience building data pipelines and applications to stream and process large datasets at low latencies

Desirable:
- Familiarity with data analytics, engineering processes and technologies
- Ability to work successfully within a global and cross-functional team
- A passion for technology. We are looking for someone who is keen to leverage their existing skills while trying new approaches, and to share that knowledge with others to help grow the data and analytics teams at Stellantis to their full potential!

Specific Skill Requirement: AWS services (Glue, DMS, EC2, RDS, S3, VPCs and all core services, Lambda, API Gateway, CloudFormation, CloudWatch, Route53, Athena, IAM), plus SQL, Qlik Sense, Python/Spark, and ETL optimization.

If you are interested, please share the below details along with an updated resume: First Name, Last Name, Date of Birth, Passport No. and Expiry Date, Alternate Contact Number, Total Experience, Relevant Experience, Current CTC, Expected CTC, Current Location, Preferred Location, Current Organization, Payroll Company, Notice Period, Holding any offer.


12.0 - 16.0 years

14 - 20 Lacs

Pune

Work from Office

AI/ML/GenAI AWS SME - Job Description

Role Overview: An AWS SME with a data science background is responsible for leveraging Amazon Web Services (AWS) to design, implement, and manage data-driven solutions. This role involves a combination of cloud computing expertise and data science skills to optimize and innovate business processes.

Key Responsibilities:
- Data Analysis and Modelling: Analyzing large datasets to derive actionable insights and building predictive models using AWS services like SageMaker, Bedrock, and Textract.
- Cloud Infrastructure Management: Designing, deploying, and maintaining scalable cloud infrastructure on AWS to support data science workflows.
- Machine Learning Implementation: Developing and deploying machine learning models using AWS ML services.
- Security and Compliance: Ensuring data security and compliance with industry standards and best practices.
- Collaboration: Working closely with cross-functional teams, including data engineers, analysts, DevOps and business stakeholders, to deliver data-driven solutions.
- Performance Optimization: Monitoring and optimizing the performance of data science applications and cloud infrastructure.
- Documentation and Reporting: Documenting processes, models, and results, and presenting findings to stakeholders.

Skills & Qualifications (Technical Skills):
- Proficiency in AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker).
- Strong programming skills in Python.
- Experience with AI/ML project life cycle steps.
- Knowledge of machine learning algorithms and frameworks (e.g., TensorFlow, Scikit-learn).
- Familiarity with data pipeline tools (e.g., AWS Glue, Apache Airflow).
- Excellent communication and collaboration abilities.
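The predictive modelling named in this role would in practice go through SageMaker or Scikit-learn, but the simplest predictive model, ordinary least squares for one feature, fits in a few lines of plain Python and illustrates what "fit a model" means. The sample data below is invented so the fit is exact.

```python
# Ordinary least squares for a single feature, in pure Python:
# slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x).

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared prediction error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]          # exactly y = 2x + 1, so the fit recovers it
slope, intercept = fit_line(xs, ys)
print(slope, intercept)     # 2.0 1.0
```

Production frameworks generalize this to many features, regularization, and distributed training, but the closed-form one-feature case is a common interview warm-up for exactly the "AI/ML project life cycle" skills listed above.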


5.0 - 9.0 years

5 - 9 Lacs

Bengaluru

Hybrid

PF detection is mandatory.

Responsibilities:
- Managing data storage solutions on AWS, such as Amazon S3, Amazon Redshift, and Amazon DynamoDB.
- Implementing and optimizing data processing workflows using AWS services like AWS Glue, Amazon EMR, and AWS Lambda.
- Working with Spotfire engineers and business analysts to ensure data is accessible and usable for analysis and visualization.
- Collaborating with other engineers and business stakeholders to understand requirements and deliver solutions.
- Writing code in languages like SQL, Python, or Scala to build and maintain data pipelines and applications.
- Using Infrastructure as Code (IaC) tools to automate the deployment and management of data infrastructure.
- A strong understanding of core AWS services, cloud concepts, and the AWS Well-Architected Framework.
- Conducting an extensive inventory/evaluation of existing environments and workflows.
- Designing and developing scalable data pipelines using AWS services to ensure efficient data flow and processing.
- Integrating/combining diverse data sources to maintain data consistency and reliability.
- Working closely with data engineers and other stakeholders to understand data requirements and ensure seamless data integration.
- Building and maintaining CI/CD pipelines.

Kindly acknowledge this mail with an updated resume.


4.0 - 9.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Required Skills:
- Extensive experience with AWS services: IAM, S3, Glue, CloudFormation and CloudWatch
- In-depth understanding of AWS IAM policy evaluation for permissions and access control
- Proficient in using Bitbucket, Confluence, GitHub, and Visual Studio Code
- Proficient in policy languages, particularly Rego scripting

Good-to-Have Skills:
- Experience with the Wiz tool for security and compliance
- Good programming skills in Python
- Advanced knowledge of additional AWS services: ECS, EKS, Lambda, SNS and SQS

Roles & Responsibilities: Senior Developer on the Wiz team specializing in Rego and AWS.
Experience: Project Manager - one to three years; AWS CloudFormation - four to six years; AWS IAM - four to six years. PSP Defined SCU in Data Engineer.
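The "IAM policy evaluation" skill this listing asks for rests on a small set of rules: an explicit Deny always wins, any matching Allow otherwise grants access, and the default is an implicit deny. The sketch below models that decision order in Python; the wildcard matching via `fnmatch` and the flat statement format are simplifications for illustration, not the full IAM semantics (no conditions, principals, or NotAction).

```python
# A toy IAM-style evaluator: explicit Deny > Allow > implicit deny.
from fnmatch import fnmatch

def evaluate(statements, action, resource):
    decision = "Deny"  # implicit default when nothing matches
    for st in statements:
        if fnmatch(action, st["Action"]) and fnmatch(resource, st["Resource"]):
            if st["Effect"] == "Deny":
                return "Deny"      # an explicit deny always wins
            decision = "Allow"     # remember the allow, keep scanning for denies
    return decision

policy = [
    {"Effect": "Allow", "Action": "s3:*",
     "Resource": "arn:aws:s3:::app-bucket/*"},
    {"Effect": "Deny",  "Action": "s3:DeleteObject", "Resource": "*"},
]
print(evaluate(policy, "s3:GetObject", "arn:aws:s3:::app-bucket/logs.txt"))  # Allow
print(evaluate(policy, "s3:DeleteObject", "arn:aws:s3:::app-bucket/x"))      # Deny
```

Rego policies for tools like Wiz encode comparable allow/deny logic declaratively; understanding this evaluation order is usually the prerequisite interviewers probe before any Rego syntax.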


8.0 - 12.0 years

10 - 14 Lacs

Pune

Hybrid

So, what’s the role all about? We are looking for a highly skilled and experienced Senior Specialist Software Engineer with strong expertise in C++ and .NET technologies to join our software development team. In this role, you will be responsible for designing, developing, and maintaining robust, scalable, and high-performance software applications aligned with business requirements and technical specifications. How will you make an impact? Apply a strong understanding of software development best practices, principles, and standards throughout the development lifecycle. Write clean, efficient, and high-quality code that adheres to coding standards and software engineering best practices. Stay current with the latest trends, technologies, and methodologies in software development and incorporate them into project work. Provide technical guidance and support to team members, helping to resolve complex technical challenges. Conduct thorough code reviews and provide constructive feedback to ensure code quality and maintainability. Demonstrate deep knowledge of modern strong expertise in .NET technologies and C++ standards , along with a solid understanding of object-oriented design principles, design patterns, and software architecture. Work on large-scale applications and manage complex codebases effectively, leveraging strong knowledge of algorithms and data structures . Optimize application performance and use profiling and debugging tools to identify and address bottlenecks and issues. Utilize AWS cloud services for application development, deployment, and monitoring. This includes working with services such as EC2, S3, Lambda, CloudWatch, RDS , and ECS/EKS . Design and implement cloud-native or cloud-migrated solutions using AWS architecture best practices. Collaborate effectively with cross-functional teams and exhibit strong communication and interpersonal skills. Manage and track project timelines to ensure timely delivery of milestones and project goals. 
Promote and enforce adherence to software development best practices within the team. Mentor and coach junior developers, supporting their professional development and technical growth. Have you got what it takes? Bachelor’s degree in computer science , Software Engineering , or a related field. 8 to 12 years of professional experience in software development using .NET and C++ technologies. Strong understanding of Object-Oriented Programming (OOP) principles and experience applying design patterns in real-world scenarios. Hands-on experience in telephony systems , including VoIP , media streaming , SIP signaling , and RTP protocols. Deep knowledge of software development best practices , including design principles, testing strategies, version control, and continuous integration. Experience in database design and development using SQL Server or similar relational database systems. Proficient with development tools such as Visual Studio , Git , and JIRA . Strong analytical and problem-solving skills , with a focus on performance and scalability. Excellent verbal and written communication skills , with the ability to explain technical concepts clearly to both technical and non-technical stakeholders. Proven ability to work independently as well as collaboratively in a team-oriented environment. Self-motivated, detail-oriented, and committed to continuous learning and improvement. Nice to Have: Experience working with public cloud platforms , preferably AWS . Hands-on experience in developing and deploying applications. Practical understanding of microservices architecture and distributed systems. Familiarity with Contact Center as a Service (CCaaS) platforms and Automatic Call Distribution (ACD) systems. Working knowledge of Agile/Scrum software development methodologies. Experience with C++, C#, .NET, and .NET Core for modern application development. What’s in it for you? 
Join an ever-growing, market-disrupting global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr! Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 8260 Reporting into: Tech Manager, Engineering, CX Role Type: Individual Contributor

Posted Date not available

Apply

7.0 - 11.0 years

7 - 17 Lacs

hyderabad

Work from Office

Role & responsibilities Deploy and support automated AWS cloud-based tools and environments in support of application teams. Analyze and respond to incidents and problems, including the development of automated monitoring and remediation to maintain uptime and expected service levels. This includes cloud infrastructure, applications, middleware, and other third-party software. Analyze and resolve problems associated with operating systems and middleware, for example Red Hat Linux, JBoss, Apache, Tomcat, Windows Server, IIS, etc. Manage, configure, respond to, and resolve AWS security alerts, including vulnerability and patch management. Design, generate, and interpret operational reports related to system health status, capacity management, and system performance management. Determine root cause for incidents, correlate recurring incidents to systemic problems, and drive towards resolution. Contribute to the build-out of cloud infrastructure, for example, working with services such as load balancers, gateways, firewalls, subnets, security groups, and storage options. Use scripting and automation tools to increase efficiency and performance and reduce costs, for example CloudFormation, Terraform, Unix Shell, Python, PowerShell, Ansible, etc. Participate in the development of Systems Engineering departmental architecture, standards, and guidelines. Work closely with application teams following Agile methods and principles. Contribute and collaborate to design, document, and publish Engineering standards, principles, guidelines, and best practices. Seek opportunities to increase efficiency through research and investigation, application team input, automation options, POCs, etc. Adhere to ethical standards and comply with the laws and regulations applicable to your job function. Key Skills (Must-Have): Experience with core AWS services like EC2, S3, SNS, Lambda, CloudWatch, and CloudTrail.
Experience in the design, development, and implementation of AWS-based infrastructure solutions using AWS APIs and Python with boto3. Strong scripting experience in Python and PowerShell/Bash. Windows and Linux system administration: OS, middleware, and application layers. Server, network, and storage performance benchmarking and optimization. In-depth understanding of the operational dependencies of applications, networks, systems, security, and policy. Experience with cloud orchestration tools like AWS CloudFormation and/or Terraform, with an emphasis on creating modular architecture. Experience with AWS IAM. Proficient in using Git branching, push/pull requests, and advanced Git workflows. Preferred candidate profile Experience with Jenkins, Ansible, or similar tools. Experience with application build technologies. Demonstrated knowledge of DevOps principles. Hands-on experience required. Strong networking knowledge, preferably with DNS, subnets, routing, security groups, whitelisting, firewalls, and various networking infrastructure. CDK, Control Tower, and AWS Control Tower Customization Solution. Experience in containerization and orchestration using Docker, Kubernetes, or Fargate/EKS/ECS. Familiar with analytics and log aggregation tools such as Splunk or Microsoft BI. Job Type: Permanent Role: Cloud Engineer Location: Hyderabad Experience: 7+ yrs Notice Period: Immediate - 15 Days
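The boto3/Python automation this posting asks for often starts as a small compliance check. The sketch below scans `describe_instances`-style output for EC2 instances missing a mandatory tag; the tag key `Owner` and the sample payload are assumptions for illustration, and in a live environment the reservations would come from `boto3.client("ec2").describe_instances()` rather than a hard-coded dict.

```python
def find_untagged_instances(reservations, required_tag="Owner"):
    """Return the IDs of instances missing a required tag.

    `reservations` follows the shape of boto3's
    ec2.describe_instances()["Reservations"] response.
    """
    missing = []
    for reservation in reservations:
        for instance in reservation.get("Instances", []):
            tags = {t["Key"] for t in instance.get("Tags", [])}
            if required_tag not in tags:
                missing.append(instance["InstanceId"])
    return missing


# With live credentials this would be fed from boto3, e.g.:
#   ec2 = boto3.client("ec2")
#   for page in ec2.get_paginator("describe_instances").paginate(): ...
sample = [
    {"Instances": [
        {"InstanceId": "i-0aaa", "Tags": [{"Key": "Owner", "Value": "ops"}]},
        {"InstanceId": "i-0bbb", "Tags": [{"Key": "Env", "Value": "dev"}]},
    ]},
]
print(find_untagged_instances(sample))  # ['i-0bbb']
```

Keeping the filtering logic separate from the API call makes the check unit-testable without AWS credentials.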

Posted Date not available

Apply

8.0 - 13.0 years

20 - 35 Lacs

pune

Work from Office

Position: Senior AWS Cloud Engineer Location: Smartworks, 43EQ, Balewadi High Street, Pune Shift: 4:30 PM IST - 1:30 AM IST (First 3 Months), Flexible to Regular IST Hours Thereafter About Reliable Group Reliable Group is a US-based company headquartered in New York, with two offices in India: New Mumbai (Airoli) and Smartworks, 43EQ, Balewadi High Street, Pune. We operate across three key business verticals: On-Demand: providing specialized technology talent for global clients. GCC (Global Capability Centers): partnering with enterprises to build and scale their India operations. Product Development: our in-house AI/ML product company develops AI chatbots and intelligent solutions for US healthcare and insurance companies. About This Opportunity This role is for one of Reliable Group's biggest GCC accounts (RSC India), which we are building in Pune. We are on a mission to hire 1,000+ people for this account over the next phase. You will be joining the founding team for this GCC and playing a critical role in shaping its AWS cloud infrastructure from the ground up. The client is the second-largest healthcare company in the USA, ranked in the Fortune 50, offering a unique opportunity to work on high-impact, enterprise-scale cloud solutions in the healthcare domain. Role Overview We are seeking a highly skilled Senior AWS Cloud Engineer with proven experience in building AWS environments from the ground up, not just consuming existing services. This role requires an AWS builder mindset, capable of designing, provisioning, and managing multi-account AWS architectures, networking, security, and database platforms end-to-end. Key Responsibilities: AWS Environment Provisioning: Design and provision multi-account AWS environments using best practices (Control Tower, Organizations). Set up and configure networking (VPC, Transit Gateway, Private Endpoints, Subnets, Routing, Firewalls).
Provision and manage AWS database platforms (RDS, Aurora, DynamoDB) with high availability and security. Manage the full AWS account lifecycle, including IAM roles, policies, and access controls. Infrastructure as Code (IaC): Develop and maintain AWS infrastructure using Terraform and AWS CloudFormation. Automate account provisioning, networking, and security configuration. Security & Compliance: Implement AWS security best practices, including IAM governance, encryption, and compliance automation. Use tools like AWS Config, GuardDuty, Security Hub, and Vault to enforce standards. Automation & CI/CD: Create automation scripts in Python, Bash, or PowerShell for provisioning and management tasks. Integrate AWS infrastructure with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI/CD). Monitoring & Optimization: Implement monitoring solutions (CloudWatch, Prometheus, Grafana) for infrastructure health and performance. Optimize cost, performance, and scalability of AWS environments. Required Skills & Experience: 8+ years of experience in Cloud Engineering, with 4+ years focused on AWS provisioning. Strong expertise in: AWS multi-account setup (Control Tower/Organizations); VPC design and networking (Transit Gateway, Private Endpoints, routing, firewalls); IAM policies, role-based access control, and security hardening; database provisioning (RDS, Aurora, DynamoDB). Proficiency in Terraform and AWS CloudFormation. Hands-on experience with scripting (Python, Bash, PowerShell). Experience with CI/CD pipelines and automation tools. Familiarity with monitoring and logging tools. Preferred Certifications AWS Certified Solutions Architect – Professional AWS Certified DevOps Engineer – Professional HashiCorp Certified: Terraform Associate Work Schedule Initial 3 months: 4:30 PM IST – 1:30 AM IST to work with the US team. After ramp-up: Option to transition to regular India working hours.
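One way the IaC work described above stays repeatable is by generating templates programmatically rather than hand-editing them. As a minimal, hedged sketch, the function below assembles a CloudFormation template for a VPC with private subnets as plain Python dicts; the CIDR ranges and logical resource names are illustrative assumptions, and a real build-out would more likely use Terraform modules or CDK constructs.

```python
import json


def vpc_template(vpc_cidr="10.0.0.0/16",
                 subnet_cidrs=("10.0.1.0/24", "10.0.2.0/24")):
    """Build a minimal CloudFormation template (as a dict) for a VPC
    with private subnets. Values are illustrative, not a reference design."""
    resources = {
        "Vpc": {
            "Type": "AWS::EC2::VPC",
            "Properties": {"CidrBlock": vpc_cidr, "EnableDnsSupport": True},
        }
    }
    # One subnet resource per CIDR, named PrivateSubnet1, PrivateSubnet2, ...
    for i, cidr in enumerate(subnet_cidrs, start=1):
        resources[f"PrivateSubnet{i}"] = {
            "Type": "AWS::EC2::Subnet",
            "Properties": {"VpcId": {"Ref": "Vpc"}, "CidrBlock": cidr},
        }
    return {"AWSTemplateFormatVersion": "2010-09-09", "Resources": resources}


template = vpc_template()
print(json.dumps(template, indent=2))
```

The resulting JSON could then be fed to `aws cloudformation deploy` or validated in a CI pipeline before any account is touched.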

Posted Date not available

Apply

5.0 - 10.0 years

20 - 25 Lacs

pune

Work from Office

Lead Software Engineer (AWS Cloud, Platform Engineering) We are seeking an experienced and motivated Lead Software Engineer to join Mastercard's AWS Platform Engineering Team. This role is critical in designing, building, and maintaining a scalable, secure, and highly available cloud platform on AWS. You will collaborate with cross-functional teams to ensure optimal platform performance, cost efficiency, and alignment with best practices. The ideal candidate is expected to have a strong understanding of AWS services, DevOps practices, and infrastructure automation, and to focus on enabling application teams to deliver value rapidly and securely. The Role Design, implement, and maintain a scalable multi-account AWS platform, leveraging services like AWS Organizations, VPC, IAM, EKS, EC2, S3, RDS, Glue, EMR, MSK, etc. Develop and manage infrastructure using tools like CloudFormation/CDK. Manage secure connectivity using technologies like AWS PrivateLink, Transit Gateway, and Direct Connect. Implement and maintain secure access controls and guardrails using AWS Control Tower, Service Control Policies (SCPs), and IAM. Engage with and improve the lifecycle of the AWS platform and services, from development to deployment, operation, and refinement. Scale systems sustainably through mechanisms like automation, and evolve systems by pushing for changes that improve reliability and velocity. Practice sustainable incident response and blameless postmortems. Proven experience in leading engineering teams, mentoring engineers, and driving technical excellence. Ability to lead architecture discussions, conduct code reviews, and foster a collaborative engineering culture. All About You 5+ years of experience in AWS cloud engineering or similar roles. Strong understanding of Object-Oriented Programming (OOP) principles and experience applying them in languages like Python, TypeScript, and Java. Fluent in the AWS Cloud Development Kit (CDK). Proficient with AWS services, especially
EKS, EC2, RDS, Lambda, API Gateway, S3, Route 53, MSK, Glue, EMR, etc. Strong knowledge of networking in AWS (VPC, Direct Connect, PrivateLink, Transit Gateway, etc.). Experience with CI/CD tools like AWS CodePipeline, Jenkins, BitBucket/GitHub, Artifactory, SonarQube, etc. Strong knowledge of best practices around logging, monitoring, and alerting solutions. Experience with software deployment and configuration automation. Expertise in designing, analyzing, and troubleshooting large-scale systems. Ability to debug, optimize code, and automate routine tasks. Systematic problem-solving approach, with effective communication skills and a sense of drive. Hands-on experience with AWS Control Tower, including setting up guardrails, managing Service Control Policies (SCPs), and configuring Landing Zones. Knowledge of security best practices and frameworks.
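The guardrail and SCP responsibilities mentioned above ultimately come down to generating and attaching IAM-style policy documents through AWS Organizations. Below is a hedged sketch that builds one common guardrail, a region-restriction SCP, as a plain dict; the allowed-region list is an assumption for illustration, and attaching the policy would go through the Organizations API or Control Tower rather than this script.

```python
import json


def region_guardrail_scp(allowed_regions):
    """Build a Service Control Policy (as a dict) that denies all actions
    outside the allowed regions -- a common Organizations guardrail pattern.
    The region list here is illustrative."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyOutsideAllowedRegions",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                # aws:RequestedRegion is the global condition key SCPs use
                # to scope where API calls may land.
                "StringNotEquals": {"aws:RequestedRegion": list(allowed_regions)}
            },
        }],
    }


scp = region_guardrail_scp(["us-east-1", "eu-west-1"])
print(json.dumps(scp, indent=2))
```

Generating the document in code keeps the guardrail reviewable and testable before it is attached to an organizational unit.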

Posted Date not available

Apply

5.0 - 8.0 years

13 - 17 Lacs

mumbai, pune

Work from Office

Design containerized & cloud-native microservices architecture. Plan & deploy modern application platforms & cloud-native platforms. Good understanding of the Agile process & methodology. Plan & implement solutions & best practices for process automation, security, alerting & monitoring, and availability. Should have a good understanding of infrastructure-as-code deployments. Plan & design CI/CD pipelines across multiple environments. Support and work alongside a cross-functional engineering team on the latest technologies. Iterate on best practices to increase the quality & velocity of deployments. Sustain and improve the process of knowledge sharing throughout the engineering team. Keep updated on modern technologies & trends, and advocate their benefits. Should possess good team management skills. Ability to drive goals/milestones, while valuing & maintaining a strong attention to detail. Excellent judgement, analytical & problem-solving skills. Excellent communication skills. Experience maintaining and deploying highly available, fault-tolerant systems at scale. Practical experience with containerization and clustering (Kubernetes/OpenShift/Rancher/Tanzu/GKE/AKS/EKS, etc.). Version control system experience (e.g. Git, SVN). Experience implementing CI/CD (e.g. Jenkins, TravisCI). Experience with configuration management tools (e.g. Ansible, Chef). Experience with infrastructure-as-code (e.g. Terraform, CloudFormation). Expertise with AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda). Container registry solutions (Harbor, JFrog, Quay, etc.). Operational experience (e.g. HA/backups). NoSQL experience (e.g. Cassandra, MongoDB, Redis). Good understanding of Kubernetes networking & security best practices. Monitoring tools like DataDog, or open-source tools like Prometheus, Nagios. Load balancer knowledge (AVI Networks, NGINX).

Posted Date not available

Apply

4.0 - 6.0 years

2 - 6 Lacs

chennai

Work from Office

Bachelor's degree in Computer Science/Engineering/Applied Math/Information Systems or Information Technology; Master's degree in Information Technology/Computer Science is a plus. 4+ years of relevant experience. Extensive current working knowledge of AWS technologies including Lambda, S3, SageMaker, Athena, etc. Extensive current working SQL knowledge. Experience in migration of legacy data into new cloud-based technologies. Experience in various RDBMS and NoSQL platforms. Working knowledge of MySQL, SQL Server, data warehousing, and ETL tools. Coding knowledge (stored procedures & Python). Location: Chennai - 6 PM to 3 AM IST (US client time).
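The legacy-data migration experience this posting asks for usually reduces to extract-transform-load steps. As a minimal, hedged sketch, the snippet below uses an in-memory SQLite table as a stand-in for a legacy RDBMS and normalizes rows before they would be loaded into a cloud target such as S3/Athena; the schema and cleanup rules are invented for illustration.

```python
import sqlite3


def extract_and_normalize(conn):
    """Pull rows from a 'legacy' table and normalize them
    (trim/title-case names, default missing emails) before loading."""
    rows = conn.execute(
        "SELECT id, name, email FROM customers ORDER BY id"
    ).fetchall()
    cleaned = []
    for cid, name, email in rows:
        cleaned.append({
            "id": cid,
            "name": name.strip().title(),
            "email": (email or "unknown@example.com").lower(),
        })
    return cleaned


# In-memory SQLite stands in for the legacy source system here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, email TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [(1, "  ada lovelace ", "ADA@EXAMPLE.COM"), (2, "alan turing", None)],
)
records = extract_and_normalize(conn)
print(records[0]["name"])  # Ada Lovelace
```

Separating extraction from normalization like this lets the transform rules be tested against fixtures before pointing the job at production data.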

Posted Date not available

Apply

5.0 - 8.0 years

2 - 6 Lacs

chennai

Work from Office

AWS: Lambda, Glue, Kafka/Kinesis; RDBMS: Oracle, MySQL, Redshift, PostgreSQL, Snowflake; Gateway; CloudFormation/Terraform; Step Functions; CloudWatch; Python; PySpark. Job role & responsibilities: Looking for a Software Engineer/Senior Software Engineer with hands-on experience in ETL projects and extensive knowledge in building data processing systems with Python, PySpark, and cloud technologies (AWS). Experience in development in AWS Cloud (S3, Redshift, Aurora, Glue, Lambda, Hive, Kinesis, Spark, Hadoop/EMR). Required Skills: Amazon Kinesis, Amazon Aurora, Data Warehouse, SQL, AWS Lambda, Spark, AWS QuickSight. Advanced Python skills. Data engineering, ETL, and ELT skills. Experience with cloud platforms (AWS, GCP, or Azure). Mandatory skills: Data Warehouse, ETL, SQL, Python, AWS Lambda, Glue, AWS Redshift.
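The Python/PySpark ETL work this role centers on typically follows a map-then-aggregate shape. Since PySpark needs a Spark runtime, here is the same logic as a plain-Python sketch: group event records by key and sum amounts, roughly what `df.groupBy("user").agg(F.sum("amount"))` would express in PySpark. The field names and sample records are illustrative assumptions.

```python
from collections import defaultdict


def total_by_user(events):
    """Aggregate amounts per user -- the plain-Python analogue of a
    PySpark groupBy/sum over a Kinesis or S3 batch."""
    totals = defaultdict(float)
    for event in events:
        totals[event["user"]] += event["amount"]
    return dict(totals)


events = [
    {"user": "u1", "amount": 10.0},
    {"user": "u2", "amount": 5.0},
    {"user": "u1", "amount": 2.5},
]
print(total_by_user(events))  # {'u1': 12.5, 'u2': 5.0}
```

The Spark version distributes the same per-key reduction across executors; expressing it first as a pure function makes the transform easy to unit-test before porting it to a DataFrame pipeline.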

Posted Date not available

Apply

3.0 - 5.0 years

4 - 8 Lacs

hyderabad

Work from Office

Roles & Responsibilities: 3+ years of working experience in data engineering. 'Hands-on keyboard' AWS implementation experience across a broad range of AWS services. Must have in-depth AWS development experience (containerization - Docker, Amazon EKS, Lambda, EC2, S3, Amazon DocumentDB, PostgreSQL). Strong knowledge of DevOps and CI/CD pipelines (GitHub, Jenkins, Artifactory). Scripting capability and the ability to develop AWS environments as code. Hands-on AWS experience with at least 1 implementation (preferably in an enterprise-scale environment). Experience with core AWS platform architecture, including areas such as: Organizations, account design, VPC, subnet, and segmentation strategies; backup and disaster recovery approach and design; environment and application automation; CloudFormation and third-party automation approach/strategy; network connectivity, Direct Connect, and VPN; AWS cost management and optimization. Skilled experience in Python libraries (NumPy, Pandas DataFrames).

Posted Date not available

Apply

7.0 - 12.0 years

7 - 11 Lacs

pune

Hybrid

What You'll Do We are looking for a Senior Data Analytics Professional with the ability to blend advanced analytics, AI/ML, and stakeholder communication to drive data-centric strategies. You will go beyond traditional BI: you'll use modern tools and create impactful stories that guide critical decisions. You have a background in data analytics and visualization, a demonstrated ability to apply GenAI techniques, and the confidence to effectively communicate and present to executive stakeholders. What Your Responsibilities Will Be Apply advanced analytics and AI/ML techniques to solve complex business problems and create value for customers. Conceptualize business problems and develop scalable, data-driven frameworks and solutions. Design scalable data pipelines and transformations using SQL, Python, and Snowflake. Manage intuitive dashboards, reports, and alerts using tools like Power BI. Use cloud platforms (Snowflake/AWS) for data orchestration, storage, and compute (e.g., S3, Redshift, Lambda, Cortex). Build GenAI-based solutions, with experience in RAG modeling. Manage engagements independently, solve challenges, and build client relationships. Ensure the accuracy, quality, and reliability of analytics output and data sources. Provide training and support to our users and contribute to a culture of data-driven decision making. Collaborate across teams (product, finance, operations, marketing) with a diversity of perspectives and technical backgrounds. What You'll Need to be Successful 7+ years of experience in analytics, data science, or related roles. Proficient in SQL, Python, and cloud-based data platforms like Snowflake, AWS, or similar. Experience with Power BI, Cortex, and other BI tools (Tableau, Looker, etc.).
Expertise in Generative AI, LLMs, and RAG modeling. Experience with data modeling, ELT/ETL (e.g., dbt, Airflow), and scalable analytics architecture. Experience managing client-facing engagements or large-scale analytics programs. Work in collaborative, cross-functional teams with varying technical backgrounds.
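The RAG modeling this posting mentions pairs a retriever with a generator, and the retrieval half can be sketched without any LLM library. In the hedged example below, documents and the query are toy pre-computed embedding vectors ranked by cosine similarity; a real system would obtain the vectors from an embedding model and serve them from a vector store.

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def top_k(query_vec, docs, k=2):
    """Rank (doc_id, vector) pairs by similarity to the query -- the
    retrieval step of a RAG pipeline, with toy 2-D vectors."""
    scored = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]


# Toy corpus: ids and made-up embedding vectors.
docs = [("pricing", [1.0, 0.0]), ("refunds", [0.0, 1.0]), ("billing", [0.9, 0.1])]
print(top_k([1.0, 0.1], docs, k=2))  # ['billing', 'pricing']
```

The retrieved document IDs would then be used to fetch the underlying text and assemble the prompt passed to the generator model.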

Posted Date not available

Apply


7.0 - 12.0 years

18 - 32 Lacs

hyderabad

Hybrid

Urgent job opening for Java (AWS) Developer and Java Full Stack Developer. Role: Java (AWS) Developer Location: Hyderabad Work Mode: Hybrid Job Description: Technology Stack (Primary Skill): Java AWS engineer with experience in building on AWS services like Lambda, Batch, SQS, S3, DynamoDB, etc. using the AWS Java SDK and CloudFormation templates. Secondary Skill Set: Agile methodology, CI/CD, or any of the cloud tools. Job Description (Primary Responsibilities): 4 to 8 years of experience in design, development, and triaging for large, complex systems. Experience in Java and object-oriented design skills. 3-4+ years of microservices development. 2+ years working in Spring Boot. Experienced using API dev tools like IntelliJ/Eclipse, Postman, Git, Cucumber. Hands-on experience in building microservices-based applications using Spring Boot, REST, and JSON. DevOps: understanding of containers, cloud, automation, security, configuration management, CI/CD. Experience using CI/CD processes for application software integration and deployment using Maven, Git, Jenkins. Experience dealing with NoSQL databases like Cassandra. Experience building scalable and resilient applications in private or public cloud environments and cloud technologies. Experience utilizing tools such as Maven, Docker, Kubernetes, ELK, Jenkins. Agile software development (typically Scrum, Kanban, SAFe). Experience with API gateways and API security. Java Full Stack: Java, Spring Boot, microservices, React.js. Interested candidates, please share your updated resume at: Bhanupriya.amarnath@zensar.com

Posted Date not available

Apply

4.0 - 6.0 years

3 - 6 Lacs

bengaluru

Work from Office

Roles and Responsibilities: Experience in planning and delivering software platforms. Deep expertise and hands-on experience with web applications and technologies such as HTML, CSS, .NET Core, C#, JavaScript, Angular, jQuery, and APIs. Knowledge of PostgreSQL, SQL, and MySQL. AWS: Lambda, SQS, SNS. Website and software application designing, building, and maintaining. The position requires constant communication with colleagues. Candidate must have a strong understanding of UI, cross-browser compatibility, general web functions, and standards. Conferring with teams to resolve conflicts, prioritize needs, develop content criteria, or choose solutions. Evaluating code to ensure it meets industry standards, is valid, is properly structured, and is compatible with browsers, devices, or operating systems.

Posted Date not available

Apply


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
