
513 Lambda Jobs - Page 6

JobPe aggregates results for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Senior Application System Development Engineer, you will design, develop, and maintain applications both independently and collaboratively within a team. Your primary focus will be to adhere to project priorities and schedules, ensuring timely completion of assigned projects while enhancing system quality and efficiency through process improvements.

Your key responsibilities will include designing, developing, and maintaining applications, upholding the quality and security of development tasks, and following best design and development practices. You will work in alignment with project priorities and schedules while embracing agile methodologies and project management practices. Additionally, you will assist in technical and project risk management, collaborate with the team on daily development tasks, and provide support and guidance to junior developers.

To excel in this role, you must have expertise in application development with HTML5, CSS3, and Angular, backed by a minimum of 5 years of relevant experience and a Bachelor's degree. Your experience should include creating application UIs with frameworks such as AngularJS or Angular Material, along with a strong understanding of design patterns, application architecture, and SOLID principles. You should also be proficient in writing unit tests with Jasmine/Jest and E2E tests with Protractor, creating tool scripts, and implementing application interfaces using WebSockets. Experience with the complete software development life cycle, agile methodologies, technical mentoring, and strong communication skills are essential.

Ideally, you would have hands-on experience with frameworks like NRWL, knowledge of the audio domain and related frameworks, and exposure to multicultural environments. Proficiency with project management tools such as Jira, Contour, and Confluence, and configuration management systems like Git or SVN, is preferred. Strong experience with AWS services, Node, TypeScript, JavaScript, and frameworks like NestJS, as well as familiarity with CloudFormation for IaC, is highly valued.

Shure is a global audio equipment manufacturer with a history of quality and innovation spanning nearly a century, headquartered in the United States with regional offices and facilities across the Americas, EMEA, and Asia. Its mission is to be the most trusted audio brand worldwide, and it offers a supportive and inclusive culture, flexible work arrangements, and opportunities for growth. If you are passionate about a diverse, equitable, and inclusive work environment and have the skills required for this role, we encourage you to apply and be part of our dynamic team at Shure.

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Kochi, Kerala

On-site

As a Java Backend Developer in our IoT domain team based in Kochi, you will be responsible for designing, developing, and deploying scalable microservices using Spring Boot, SQL databases, and AWS services. Your role will involve leading the backend development team, implementing DevOps best practices, and optimizing cloud infrastructure.

Your key responsibilities will include architecting and implementing high-performance, secure backend services using Java (Spring Boot); developing RESTful APIs and event-driven microservices with a focus on scalability and reliability; designing and optimizing SQL databases (PostgreSQL, MySQL); and deploying applications on AWS using services like ECS, Lambda, RDS, S3, and API Gateway. You will also implement CI/CD pipelines, monitor and improve backend performance, ensure security best practices, and handle authentication using OAuth, JWT, and IAM roles.

Required skills include proficiency in Java (Spring Boot, Spring Cloud, Spring Security), microservices architecture, API development, SQL (PostgreSQL, MySQL), ORM (JPA, Hibernate), DevOps tools (Docker, Kubernetes, Terraform, CI/CD, GitHub Actions, Jenkins), AWS cloud services (EC2, Lambda, ECS, RDS, S3, IAM, API Gateway, CloudWatch), messaging systems (Kafka, RabbitMQ, SQS, MQTT), testing frameworks (JUnit, Mockito, integration testing), and logging and monitoring tools (ELK Stack, Prometheus, Grafana).

Preferred skills include experience in the IoT domain, work experience in startups, event-driven architecture using Apache Kafka, knowledge of Infrastructure as Code (IaC) with Terraform, and exposure to serverless architectures.

In return, we offer a competitive salary, performance-based incentives, the opportunity to lead and mentor a high-performing tech team, hands-on experience with cutting-edge cloud and microservices technologies, and a collaborative, fast-paced work environment. If you have experience in the IoT domain and are looking for a full-time, day-shift, in-person role, we encourage you to apply for this exciting opportunity in Kochi.

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

As a Workday Technical Specialist at Nasdaq Technology in Bangalore, India, you will be a key player in delivering complex technical systems to both new and existing customers. Your role will involve discovering and implementing innovative technologies within the FinTech industry, contributing to the continuous revolutionizing of markets and the adoption of new solutions. You will work as part of the Enterprise Solutions team, driving central initiatives across Nasdaq's corporate technology portfolio of software products and software services.

In this role, you will collaborate with a global team to deliver critical solutions and services to Nasdaq's finance processes and operations. Your responsibilities will include designing and configuring applications to meet business requirements, maintaining integrations with internal systems and third-party vendors, and documenting technical solutions for future reference. You will also participate in end-to-end testing, build test cases, and follow established processes to ensure the quality of your work.

To excel in this position, you are expected to have at least 10 to 13 years of software development experience, expertise in Workday integration tools and web services programming, and a strong understanding of the Workday security model. A Bachelor's or Master's degree in computer science or a related field is required. Additionally, knowledge of Workday Finance modules, experience in multinational organizations, familiarity with middleware systems and ETL tools, and exposure to AWS services would be beneficial.

Nasdaq offers a vibrant and entrepreneurial work environment that encourages employees to take initiative, challenge the status quo, and embrace work-life balance. You will have the opportunity to grow within the Enterprise Solutions team, collaborate with experts globally, and contribute to cutting-edge technology solutions. If you resonate with Nasdaq's values and are eager to deliver top technology solutions to today's markets, we encourage you to apply in English at your earliest convenience. As part of the selection process, we will review applications and aim to provide feedback within 2-3 weeks.

Joining Nasdaq also brings an annual monetary bonus, the opportunity to become a Nasdaq shareholder, health insurance programs, flexible working schedules, internal mentorship initiatives, and a wide range of online learning resources. Embrace the culture of innovation, connectivity, and empowerment at Nasdaq, where diversity and inclusion are celebrated and every individual is valued for their authentic self.

Posted 1 week ago

Apply

10.0 - 15.0 years

0 Lacs

Maharashtra

On-site

As a Lead Software Engineer at NEC Software Solutions (India) Private Limited, you will be part of a dynamic team working on innovative applications that use AI to enhance efficiency within the public safety sector. With 10-15 years of experience, your primary expertise in Python and React will be crucial in developing new functionality for an AI-enabled product roadmap. You will collaborate closely with the product owner and solution architect to create robust, market-ready software products that meet the highest engineering and user experience standards.

Your responsibilities will include writing reusable, testable, and efficient Python code, and working with document and image processing libraries, API Gateway, backend CRUD operations, and cloud infrastructure (preferably AWS). Your expertise in TypeScript and React for frontend development, designing clean user interfaces, and backend programming for web applications will be instrumental in delivering software features from concept to production.

Personal attributes such as problem-solving skills, inquisitiveness, autonomy, motivation, integrity, and big-picture awareness will play a vital role in the team's success. You will also have the opportunity to develop new skills, lead technical discussions, and engage in self-training and external training sessions to enhance your capabilities. As a senior full stack engineer, you will actively participate in discussions with the product owner and solution architect, ensure customer-centric development, oversee the software development lifecycle, and implement secure, scalable, and resilient solutions for NECSWS products. Your role will also involve supporting customers and production systems to ensure seamless operations.

The ideal candidate holds a graduate degree, possesses outstanding leadership qualities, and has a strong background in IT, preferably with experience in the public sector or emergency services. If you thrive in a challenging environment, enjoy working with cutting-edge technologies, and are passionate about delivering high-quality software solutions, we invite you to join our team at NEC Software Solutions (India) Private Limited.
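The reusable, testable Python code this listing asks for (around document processing) can be illustrated with a minimal sketch; the function name and the simple word-frequency heuristic are our own assumptions for the example, not part of the posting:

```python
import re

def summarize_document(text: str, max_keywords: int = 5) -> dict:
    """Return basic stats for a text document: word count, line count,
    and the most frequent words (a stand-in for richer processing)."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    # Rank by descending frequency, then alphabetically for a stable order.
    top = sorted(counts, key=lambda w: (-counts[w], w))[:max_keywords]
    return {
        "words": len(words),
        "lines": text.count("\n") + 1 if text else 0,
        "keywords": top,
    }
```

A pure function like this is straightforward to unit-test, which matches the listing's emphasis on testable code.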

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Platform Engineer Lead at Barclays, your role is crucial in building and maintaining systems that collect, store, process, and analyze data, including data pipelines, data warehouses, and data lakes. You are responsible for ensuring the accuracy, accessibility, and security of all data.

To excel in this role, you should have hands-on coding experience in Java or Python and a strong understanding of AWS development, encompassing services such as Lambda, Glue, Step Functions, and IAM roles. Proficiency in building efficient data pipelines using Apache Spark and AWS services is essential. You are expected to possess strong technical acumen, troubleshoot complex systems, and apply sound engineering principles to problem-solving. Continuous learning and staying current with new technologies are key attributes for success in this role. Design experience on diverse projects where you have led the technical development is advantageous, especially in the Big Data/Data Warehouse domain within financial services.

Additional skills in developing enterprise-level software solutions, knowledge of file formats such as JSON, Iceberg, and Avro, and familiarity with streaming services such as Kafka, MSK, and Kinesis are highly valued. Effective communication, collaboration with cross-functional teams, documentation skills, and experience mentoring team members are also important aspects of this role.

Your accountabilities will include building and maintaining data architecture pipelines, designing and implementing data warehouses and data lakes, developing processing and analysis algorithms, and collaborating with data scientists to deploy machine learning models. You will also be expected to contribute to strategy, drive requirements for change, manage resources and policies, deliver continuous improvements, and demonstrate leadership behaviors if in a leadership role. Ultimately, as a Data Platform Engineer Lead at Barclays in Pune, you will play a pivotal role in ensuring data accuracy, accessibility, and security while leveraging your technical expertise and collaborative skills to drive innovation and excellence in data management.
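The data-quality side of the pipeline work described above can be sketched in plain Python; the field names and rules are invented for the illustration (a real pipeline would express the same logic as a Spark or Glue transformation at scale):

```python
def clean_records(rows):
    """Minimal pipeline stage: drop rows missing required fields,
    normalise types, and report simple data-quality counts."""
    required = ("id", "amount")
    good, dropped = [], 0
    for row in rows:
        # Reject rows with missing or empty required fields.
        if any(row.get(k) in (None, "") for k in required):
            dropped += 1
            continue
        good.append({"id": str(row["id"]), "amount": round(float(row["amount"]), 2)})
    return good, {"kept": len(good), "dropped": dropped}
```

Emitting the quality counts alongside the cleaned rows is one simple way to make pipeline health observable downstream.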

Posted 1 week ago

Apply

4.0 - 9.0 years

10 - 20 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Role & responsibilities:
- Strong experience in core Python for application development.
- AWS application development experience (backend services development).
- AWS services: API Gateway, Lambda Functions, Step Functions, EKS/ECS/EC2, S3, SQS/SNS/EventBridge, RDS, CloudWatch, ELB.
- System design: LLD (low-level design documents) using OOP, SOLID, and other design principles.
- Gen AI (nice to have): knowledge of LLMs, RAG, fine-tuning, LangChain, prompt engineering.
- Experience working with stakeholders (business, technology) on requirement refinement to create acceptance criteria, HLD, and LLD.
- Hands-on in designing (competent with design tools like draw.io, Lucid, or Visio) and developing end-to-end tasks.
- Hands-on with at least one RDBMS (PostgreSQL/MySQL/Oracle DB); knowledge of ORM (SQLAlchemy). Good to have: MongoDB/DynamoDB.
- Excellent communication skills (written and oral).
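The API Gateway plus Lambda backend work listed above can be sketched as a minimal Python handler. The payload shape and greeting logic are illustrative assumptions, but the dict returned follows the proxy-style response format API Gateway Lambda integrations expect:

```python
import json

def lambda_handler(event, context):
    """Sketch of an API Gateway-backed Lambda: parse the request body,
    apply a tiny piece of business logic, return a proxy-style response."""
    try:
        body = json.loads(event.get("body") or "{}")
        name = body.get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"hello, {name}"}),
        }
    except json.JSONDecodeError:
        # Malformed JSON from the client maps to a 400 response.
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}
```

Invoking the handler locally with a plain dict event is also how unit tests would exercise it, in line with the design and testing emphasis in the listing.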

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Maharashtra

On-site

As a Solutions Architect with over 7 years of experience, you will leverage your expertise in cloud data solutions to architect scalable, modern solutions on AWS. In this role at Quantiphi, you will be a key member of our high-impact engineering teams, working closely with clients to solve complex data challenges and design cutting-edge data analytics solutions.

Your responsibilities will include acting as a trusted advisor to clients, leading discovery and design workshops with global customers, and collaborating with AWS subject matter experts to develop compelling proposals and Statements of Work (SOWs). You will also represent Quantiphi in forums such as tech talks, webinars, and client presentations, providing strategic insights and solutioning support during pre-sales activities.

To excel in this role, you should have a strong background in AWS data services including DMS, SCT, Redshift, Glue, Lambda, EMR, and Kinesis. Your experience in data migration and modernization, particularly from Oracle, Teradata, and Netezza to AWS, will be crucial. Hands-on experience with ETL tools such as SSIS, Informatica, and Talend, as well as a solid understanding of OLTP/OLAP, star and snowflake schemas, and data modeling methodologies, is essential. Additionally, familiarity with backend development using Python, APIs, and stream processing technologies like Kafka, along with knowledge of distributed computing concepts including Hadoop and MapReduce, will be beneficial. A DevOps mindset with experience in CI/CD practices and Infrastructure as Code is also desired.

Joining Quantiphi as a Solutions Architect is more than just a job: it is an opportunity to shape digital transformation journeys and influence business strategies across various industries. If you are a cloud data enthusiast looking to make a significant impact in the field of data analytics, this role is perfect for you.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As an AWS Data Engineer at Sufalam Technologies, located in Ahmedabad, India, you will be responsible for designing and implementing data engineering solutions on AWS. Your role will involve developing data models, managing ETL processes, and ensuring the efficient operation of data warehousing solutions. Your expertise will contribute to data analytics activities supporting business decision-making and strategic goals.

Key responsibilities include designing and implementing scalable, secure ETL/ELT pipelines for processing financial data; collaborating closely with the Finance, Data Science, and Product teams to understand reconciliation needs and ensure timely data delivery; implementing monitoring and alerting for pipeline health and data quality; maintaining detailed documentation on data flows, models, and reconciliation logic; and ensuring compliance with financial data handling and audit standards.

To excel in this role, you should have 5-6 years of experience in data engineering with a strong focus on AWS data services. Hands-on experience with AWS Glue, Lambda, S3, Redshift, Athena, Step Functions, Lake Formation, and IAM is essential for secure data governance. A solid understanding of data reconciliation processes in the finance domain, strong SQL skills, experience with data warehousing and data lakes, and proficiency in Python or PySpark for data transformation are required. Knowledge of financial accounting principles or experience working with financial datasets (AR, AP, General Ledger, etc.) would be beneficial.
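The finance-domain reconciliation logic the posting emphasizes can be illustrated with a toy Python function; the record shape and bucket names are our own assumptions for the sketch:

```python
def reconcile(source_rows, target_rows):
    """Toy reconciliation: match records by id across two systems,
    flag amount mismatches and records present on only one side."""
    src = {r["id"]: r["amount"] for r in source_rows}
    tgt = {r["id"]: r["amount"] for r in target_rows}
    return {
        "matched": sorted(k for k in src if k in tgt and src[k] == tgt[k]),
        "mismatched": sorted(k for k in src if k in tgt and src[k] != tgt[k]),
        "missing_in_target": sorted(set(src) - set(tgt)),
        "missing_in_source": sorted(set(tgt) - set(src)),
    }
```

In a real pipeline the same comparison would run as SQL or PySpark over ledger extracts, with the mismatch buckets feeding monitoring and alerting.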

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You should have a solid understanding of object-oriented programming and software design patterns. Your responsibilities will include designing, developing, and maintaining web, service, and desktop applications using .NET, .NET Core, and React.js. Additionally, you will work on React.js/MVC front-end development; AWS services such as ASG, EC2, S3, Lambda, IAM, AMI, and CloudWatch; Jenkins; and RESTful API development and integration.

Familiarity with database technologies such as SQL Server is important, as is ensuring the performance, quality, and responsiveness of applications. Collaboration with cross-functional teams to define, design, and ship new features will be crucial. Experience with version control systems like Git/TFS is required, with a preference for Git. Excellent communication and teamwork skills are essential, along with familiarity with Agile/Scrum development methodologies.

This is a full-time position with benefits including cell phone reimbursement. The work location is in person.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

You will be responsible for building the most personalized and intelligent news experiences for India's next 750 million digital users. As our Principal Data Engineer, your main tasks will include designing and maintaining the data infrastructure that powers personalization systems and analytics platforms. This involves ensuring seamless data flow from source to consumption, architecting scalable data pipelines to process massive volumes of user interaction and content data, and developing robust ETL processes for large-scale transformations and analytical processing. You will also create and maintain data lakes and warehouses that consolidate data from multiple sources, optimized for ML model consumption and business intelligence. Additionally, you will implement data governance practices and collaborate with the ML team to ensure the right data is available for recommendation systems.

To excel in this role, you should have a Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field, along with 8-12 years of data engineering experience, including at least 3 years in a senior role. You must possess expert-level SQL skills and strong experience with the Apache Spark ecosystem (Spark SQL, Streaming, SparkML), as well as proficiency in Python/Scala. Experience with the AWS data ecosystem (Redshift, S3, Glue, EMR, Kinesis, Lambda, Athena) and ETL frameworks (Glue, Airflow) is essential. A proven track record of building large-scale data pipelines in production environments, particularly in high-traffic digital media, will be advantageous. Excellent communication skills are also required, as you will need to collaborate effectively across teams in a fast-paced environment that demands engineering agility.
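The personalization aggregations described above can be sketched locally in plain Python (a Spark job would express the same logic with DataFrames at scale); the event shape and ranking rule are assumptions for the example:

```python
from collections import Counter, defaultdict

def top_articles_per_user(events, n=2):
    """Batch-style aggregation, the kind of logic a Spark job would run
    at scale: count clicks per user per article and keep the top n."""
    per_user = defaultdict(Counter)
    for e in events:
        per_user[e["user"]][e["article"]] += 1
    # most_common(n) orders by descending count.
    return {u: [a for a, _ in c.most_common(n)] for u, c in per_user.items()}
```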

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

As a DevOps Engineer, you will play a crucial role in constructing and managing a robust, scalable, and reliable zero-downtime platform. You will be actively involved in a newly initiated greenfield project that utilizes modern infrastructure and automation tools to support our engineering teams. This is a valuable opportunity to collaborate with an innovative team, foster a culture of fresh thinking, integrate AI and automation, and contribute to our cloud-native journey. If you are enthusiastic about automation, cloud infrastructure, and delivering high-quality production-grade platforms, this position gives you the opportunity to make a significant impact.

Your primary responsibilities will include:

- **Hands-On Development**: Design, implement, and optimize AWS infrastructure through hands-on development using Infrastructure as Code (IaC) tools.
- **Automation & CI/CD**: Develop and maintain CI/CD pipelines to automate rapid, secure, and seamless deployments.
- **Platform Reliability**: Ensure the high availability, scalability, and resilience of our platform by leveraging managed services.
- **Monitoring & Observability**: Implement and oversee proactive observability using tools like DataDog to monitor system health, performance, and security, ensuring prompt issue identification and resolution.
- **Cloud Security & Best Practices**: Apply cloud and security best practices, including configuring networking, encryption, secrets management, and identity/access management.
- **Continuous Improvement**: Contribute innovative ideas and solutions to enhance our DevOps processes.
- **AI & Future Tech**: Explore opportunities to incorporate AI into our DevOps processes and contribute to AI-driven development.

Your experience should encompass proficiency in the following technologies and concepts:

- **Tech Stack**: Terraform, Terragrunt, Helm, Python, Bash, AWS (EKS, Lambda, EC2, RDS/Aurora), Linux, and GitHub Actions.
- **Strong Expertise**: Hands-on experience with Terraform, IaC principles, CI/CD, and the AWS ecosystem.
- **Networking & Cloud Configuration**: Proven experience with networking (VPC, subnets, security groups, API Gateway, load balancing, WAF) and cloud configuration (Secrets Manager, IAM, KMS).
- **Kubernetes & Deployment Strategies**: Comfortable with Kubernetes, ArgoCD, Istio, and deployment strategies such as blue/green and canary.
- **Cloud Security Services**: Familiarity with cloud security services such as Security Hub, GuardDuty, Inspector, and vulnerability observability.
- **Observability Mindset**: A strong belief in measuring everything, using tools like DataDog for platform health and security visibility.
- **AI Integration**: Experience with embedding AI into DevOps processes is considered advantageous.

This role presents an exciting opportunity to contribute to cutting-edge projects, collaborate with a forward-thinking team, and drive innovation in DevOps engineering.
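The canary deployment strategy this listing mentions can be illustrated with a small, hypothetical traffic-shifting rule in Python; the threshold, step size, and return values are invented for the sketch (real rollouts would drive a service mesh or load balancer, e.g. Istio weights):

```python
def canary_step(healthy_ratio, current_weight, step=10, threshold=0.99):
    """Sketch of canary promotion logic: shift more traffic to the new
    version while health stays above the threshold, else roll back."""
    if healthy_ratio < threshold:
        # Health regressed: send all traffic back to the stable version.
        return 0, "rollback"
    new_weight = min(100, current_weight + step)
    return new_weight, "promoted" if new_weight == 100 else "advancing"
```

A controller would call this on each evaluation interval, feeding it the health ratio reported by monitoring.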

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a DevOps Engineer, you will play a key role in building and maintaining a robust, scalable, and reliable zero-downtime platform. You will work hands-on with a recently kick-started greenfield initiative, using modern infrastructure and automation tools to support our engineering teams. This is a great opportunity to work with a forward-thinking team and have the freedom to approach problems with fresh thinking, embedding AI and automation to help shape our cloud-native journey. If you are passionate about automation, cloud infrastructure, and delivering high-quality production-grade platforms, this role offers the chance to make a real impact.

Key Responsibilities:

- Hands-On Development: Design, implement, and optimize AWS infrastructure through hands-on development using Infrastructure as Code tools.
- Automation & CI/CD: Develop and maintain CI/CD pipelines to automate fast, secure, and seamless deployments.
- Platform Reliability: Ensure high availability, scalability, and resilience of our platform, leveraging managed services.
- Monitoring & Observability: Implement and manage proactive observability using DataDog and other tools to monitor system health, performance, and security, ensuring that we can see and fix issues before they impact users.
- Cloud Security & Best Practices: Apply cloud and security best practices, including patching and secure configuration of networking, encryption (at rest and in transit), secrets, and identity/access management.
- Continuous Improvement: Contribute ideas and solutions to improve our DevOps processes.
- AI & Future Tech: We aim to push the boundaries of AI-driven development. If you have ideas on how to embed AI into our DevOps processes, you will have the space to explore them.

Your Experience:

- Tech stack: We use Terraform, Terragrunt, Helm, Python, Bash, AWS (EKS, Lambda, EC2, RDS/Aurora), Linux, and GitHub Actions. You are comfortable with all of these and have strong hands-on experience with Terraform and IaC principles, CI/CD, and the AWS ecosystem.
- Proven experience with networking (VPC, subnets, security groups, API Gateway, load balancing, WAF) and cloud configuration (Secrets Manager, IAM, KMS).
- Comfortable with Kubernetes, ArgoCD, Istio, and deployment strategies (blue/green and canary).
- Familiarity with cloud security services such as Security Hub, GuardDuty, Inspector, and vulnerability management/patching.
- Observability mindset: You believe in measuring everything and have worked with DataDog (or similar) to ensure teams have visibility into platform health and security.
- Experience with embedding AI into DevOps processes is advantageous.

Posted 2 weeks ago

Apply

2.0 - 4.0 years

4 - 6 Lacs

Gurugram

Work from Office

Company Overview: Incedo is a US-based consulting, data science, and technology services firm with over 3,000 people helping clients from our six offices across the US, Mexico, and India. We help our clients achieve competitive advantage through end-to-end digital transformation. Our uniqueness lies in bringing together strong engineering, data science, and design capabilities coupled with deep domain understanding. We combine services and products to maximize business impact for our clients in the telecom, banking, wealth management, product engineering, and life science and healthcare industries. Working at Incedo will provide you an opportunity to work with industry-leading client organizations, deep technology and domain experts, and global teams. Incedo University, our learning platform, provides ample learning opportunities, starting with a structured onboarding program and continuing throughout various stages of your career. A variety of fun activities is also an integral part of our friendly work environment. Our flexible career paths allow you to grow into a program manager, a technical architect, or a domain expert based on your skills and interests. Our mission is to enable our clients to maximize business impact from technology by harnessing the transformational impact of emerging technologies and bridging the gap between business and technology.

Role Description: Write and maintain build/deploy scripts. Work with the Sr. Systems Administrator to deploy and implement new cloud infrastructure and designs. Manage existing AWS deployments and infrastructure. Build scalable, secure, and cost-optimized AWS architecture. Ensure best practices are followed and implemented. Assist in the deployment and operation of security tools and monitoring. Automate tasks where appropriate to enhance response times to issues and tickets. Collaborate with cross-functional teams: work closely with development, operations, and security teams to ensure a cohesive approach to infrastructure and application security, and participate in regular security reviews and planning sessions. Incident response and recovery: participate in incident response planning and execution, including post-mortem analysis and implementation of preventive measures. Continuous improvement: regularly review and update security practices and procedures to adapt to the evolving threat landscape. Analyze and remediate vulnerabilities and advise developers of vulnerabilities requiring code updates. Create and maintain documentation and diagrams for application, security, and network configurations. Ensure systems are monitored using tools such as Datadog, and that issues are logged and reported to the required parties.

Technical Skills: In-depth experience with system administration, provisioning and managing cloud infrastructure, and security monitoring. Experience with infrastructure/security monitoring and operation of a product or service. Experience with containerization and orchestration, such as Docker and Kubernetes/EKS. Hands-on experience creating system architectures and leading architecture discussions at a team or multi-team level. Understanding of how to model system infrastructure in the cloud with Amazon Web Services (AWS), AWS CloudFormation, or Terraform. Strong knowledge of cloud infrastructure services (AWS preferred) such as Lambda, Cognito, SQS, KMS, S3, Step Functions, Glue/Spark, CloudWatch, Secrets Manager, Simple Email Service, and CloudFront. Familiarity with coding, scripting, and testing tools (preferred). Strong interpersonal, coordination, and multitasking skills. Ability to function both independently and collaboratively as part of a team to achieve desired results. Aptitude to pick up new concepts and technology rapidly, and the ability to explain them to both business and technical stakeholders. Ability to adapt and succeed in a fast-paced, dynamic startup environment. Experience with Nessus and other related infosec tooling.

Nice-to-have Skills: Strong interpersonal, coordination, and multitasking skills. Ability to work independently and follow through to achieve desired results. Quick learner, with the ability to work calmly under pressure and with tight deadlines. Ability to adapt and succeed in a fast-paced, dynamic startup environment.

Qualifications: BA/BS degree in Computer Science, Computer Engineering, or a related field; MS degree in Computer Science or Computer Engineering preferred.

Posted 2 weeks ago

Apply

6.0 - 8.0 years

18 - 30 Lacs

Pune

Hybrid

Key Skills: Cloud API development, AWS API Gateway, Lambda, Python, JSON, SQL (Oracle, PostgreSQL, MariaDB), Appian, SAIL, MuleSoft, CI/CD, DevOps, Agile, JIRA, security compliance, technical documentation, stakeholder collaboration.

Roles & Responsibilities: Collaborate with architects, developers, and project managers to deliver scalable and compliant solutions aligned with business needs. Research, evaluate, and implement new infrastructure technologies in line with standards and governance. Provide technical consultancy and mentoring to team members on emerging technologies. Ensure all deliverables meet high-quality standards, compliance policies, and best practices. Prepare and maintain project-related documentation, ensuring adherence to audit and compliance requirements. Offer expert support to developers and business users on secure access and API-related queries. Work closely with global and regional stakeholders on mandatory, regulatory, and development projects. Ensure system compliance with infrastructure and security policies. Develop APIs that interact with databases and cloud storage while implementing appropriate security controls. Design and deliver APIs in the cloud through managed services such as AWS API Gateway. Support the design and development of integrations with third-party systems. Implement DevOps practices and CI/CD pipelines using modern tools and frameworks. Follow Agile Scrum methodology and actively participate in sprints using tools like JIRA.

Experience Requirement: 6-10 years of experience in designing, developing, and delivering APIs in cloud environments using AWS API Gateway and Lambda functions. Strong hands-on experience with Python, JSON, Oracle SQL, PostgreSQL, MariaDB, and API integration tools such as MuleSoft. Experience with SAIL and Appian development is highly preferred. Background in designing and implementing CI/CD pipelines and applying DevOps principles. Knowledge of Agile Scrum methodology and experience in multi-cultural, global team environments. Ability to independently manage priorities under pressure, ensuring quality results on complex projects. Strong communication and analytical skills with the ability to convey technical solutions to diverse stakeholders.

Education: B.Tech/M.Tech (Dual), B.Tech, M.Tech.
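The core work this role describes, Python Lambda functions fronted by API Gateway, can be sketched roughly as follows. The event shape assumes the API Gateway proxy integration; the order-style payload fields are invented for illustration, not taken from the listing:

```python
import json

# Minimal sketch of an AWS Lambda handler behind an API Gateway proxy
# integration. The "order_id" field is a hypothetical example payload.
def handler(event, context):
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}

    order_id = body.get("order_id")
    if not order_id:
        return {"statusCode": 400, "body": json.dumps({"error": "order_id required"})}

    # In a real function this is where the database or object-storage call
    # (e.g. via boto3) would happen, using least-privilege IAM credentials.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"order_id": order_id, "status": "received"}),
    }
```

The same handler pattern applies whether the backing store is Oracle, PostgreSQL, or MariaDB; only the data-access layer changes.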

Posted 2 weeks ago

Apply

3.0 - 8.0 years

15 - 22 Lacs

Gurugram

Remote

Position Details:
Job Title: Data Engineer
Client: Yum! Brands
Job ID: 1666-1
Location: Remote
Project Duration: 06 months (Contract)

Job Description: We are seeking a skilled Data Engineer who is knowledgeable about and loves working with modern data integration frameworks, big data, and cloud technologies. Candidates must also be proficient with data programming languages (e.g., Python and SQL). The Yum! data engineer will build a variety of data pipelines and models to support advanced AI/ML analytics projects with the intent of elevating the customer experience and driving revenue and profit growth in our restaurants globally. The candidate will work in our office in Gurgaon, India.

Key Responsibilities: As a data engineer, you will:
• Partner with KFC, Pizza Hut, Taco Bell & Habit Burger to build data pipelines to enable best-in-class restaurant technology solutions.
• Play a key role in our Data Operations team, developing data solutions responsible for driving Yum! growth.
• Design and develop data pipelines, streaming and batch, to move data from point-of-sale, back-of-house, operational platforms, and more to our Global Data Hub.
• Contribute to standardizing and developing a framework to extend these pipelines across brands and markets.
• Develop on the Yum! data platform by building applications using a mix of open-source frameworks (PySpark, Kubernetes, Airflow, etc.) and best-of-breed SaaS tools (Informatica Cloud, Snowflake, Domo, etc.).
• Implement and manage production support processes around data lifecycle, data quality, coding utilities, storage, reporting, and other data integration points.

Skills and Qualifications:
• Vast background in all things data-related.
• AWS platform development experience (EKS, S3, API Gateway, Lambda, etc.).
• Experience with modern ETL tools such as Informatica, Matillion, or dbt; Informatica CDI is a plus.
• High level of proficiency with SQL (Snowflake a big plus).
• Proficiency with Python for transforming data and automating tasks.
• Experience with Kafka, Pulsar, or other streaming technologies.
• Experience orchestrating complex task flows across a variety of technologies.
• Bachelor's degree from an accredited institution or relevant experience.
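As a rough illustration of the batch-transformation step such point-of-sale pipelines perform, here is a plain-Python sketch. The field names (store_id, amount, ts) are invented for illustration; a pipeline in this stack would more likely be written in PySpark or an ETL tool such as Informatica:

```python
from datetime import datetime

# Plain-Python sketch of one batch step: normalize raw transaction records
# before loading them into a warehouse. Malformed rows are skipped rather
# than failing the whole batch (in practice they would go to a dead-letter
# store for inspection).
def normalize(records):
    cleaned = []
    for rec in records:
        try:
            cleaned.append({
                "store_id": str(rec["store_id"]).strip().upper(),
                "amount": round(float(rec["amount"]), 2),
                "day": datetime.fromisoformat(rec["ts"]).date().isoformat(),
            })
        except (KeyError, ValueError):
            continue
    return cleaned
```

The same validate-transform-load shape scales up directly to a PySpark `DataFrame` job or a streaming consumer.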

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Who We Are: We are a digitally native company that helps organizations reinvent themselves and unleash their potential. We are the place where innovation, design, and engineering meet scale. Globant is a 20-year-old, NYSE-listed public organization with more than 33,000 employees worldwide working out of 35 countries. www.globant.com

Job location: Pune/Hyderabad/Bangalore
Work Mode: Hybrid
Experience: 5 to 10 Years

Must-have skills:
1) AWS (EC2, EMR & EKS)
2) Redshift
3) Lambda Functions
4) Glue
5) Python
6) PySpark
7) SQL
8) CloudWatch
9) NoSQL database (DynamoDB, MongoDB, or any other)

We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have a strong background in designing, developing, and managing data pipelines, working with cloud technologies, and optimizing data workflows. You will play a key role in supporting our data-driven initiatives and ensuring the seamless integration and analysis of large datasets. Design Scalable Data Models: Develop and maintain conceptual, logical, and physical data models for structured and semi-structured data in AWS environments. Optimize Data Pipelines: Work closely with data engineers to align data models with AWS-native data pipeline design and ETL best practices. AWS Cloud Data Services: Design and implement data solutions leveraging AWS Redshift, Athena, Glue, S3, Lake Formation, and AWS-native ETL workflows. Design, develop, and maintain scalable data pipelines and ETL processes using AWS services (Glue, Lambda, Redshift). Write efficient, reusable, and maintainable Python and PySpark scripts for data processing and transformation. Write and optimize complex SQL queries for performance and scalability. Monitor, troubleshoot, and improve data pipelines for reliability and performance. Focusing on ETL automation with Python and PySpark, you will design, build, and maintain efficient data pipelines, ensuring data quality and integrity for various applications.

Posted 2 weeks ago

Apply

8.0 - 9.0 years

12 - 15 Lacs

Kolkata

Remote

Looking for an Associate Tech Lead (Python Full Stack) with strong backend and frontend experience, AWS exposure, and a hands-on approach to microservices, serverless architecture, and security domains. Required Candidate profile Python, Django, ReactJS, Redux, TypeScript, MongoDB, PostgreSQL, AWS, Docker, Jenkins, Git, REST, SOAP, XML, JSON, CI/CD, Identity Mgmt, API Security, Linux, Agile

Posted 2 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

Hyderabad

Work from Office

Job Summary: Looking for a highly experienced and self-driven AWS Cloud Operations Engineer. The ideal candidate will have deep expertise in AWS services, infrastructure automation, monitoring, incident response, and continuous improvement of cloud operations. This role is critical to ensuring the scalability, reliability, and security of our cloud infrastructure. Key Responsibilities: Manage and maintain AWS infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation. Design and implement highly available, fault-tolerant, and secure cloud environments. Automate infrastructure provisioning, configuration management, and deployment processes. Monitor system health, performance, and capacity planning using AWS CloudWatch, Datadog, Prometheus, or other observability tools. Ensure proper incident and problem management including root cause analysis and remediation. Implement and manage CI/CD pipelines for automated deployments and updates. Manage backup, disaster recovery, and business continuity planning in AWS. Collaborate with development, security, and DevOps teams to optimize system operations. Support and enforce security best practices, including IAM policies, encryption, and vulnerability management. Stay updated on AWS best practices and new services and recommend improvements. Required Qualifications: 5+ years of experience in Cloud Operations with a strong focus on AWS. Hands-on experience with core AWS services including EC2, S3, RDS, VPC, IAM, Lambda, ECS/EKS, CloudFront, and Route53. Proficiency in Infrastructure as Code using Terraform, CloudFormation, or CDK. Strong scripting skills in Bash, Python, or PowerShell. Deep understanding of networking concepts, security groups, NACLs, load balancers, DNS, etc. Experience with monitoring and alerting tools such as CloudWatch, Datadog, Grafana, or ELK Stack. Familiarity with CI/CD tools such as Jenkins, GitHub Actions, CodePipeline, or CircleCI. 
Solid understanding of security best practices and compliance (e.g., HIPAA, SOC 2, ISO). Preferred Qualifications: AWS certifications (e.g., AWS Certified SysOps Administrator, Solutions Architect, DevOps Engineer). Experience with containerization and orchestration (Docker, ECS, EKS, or Kubernetes). Experience with incident management tools like PagerDuty, OpsGenie, or ServiceNow.
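One of the security responsibilities above, enforcing least-privilege IAM policies, can be sketched as generating the policy document programmatically before feeding it to Terraform or CloudFormation. The bucket name and action set below are illustrative assumptions, not a recommended production policy:

```python
import json

# Sketch: build a least-privilege IAM policy document granting read-only
# access to a single S3 bucket. "example-logs" and the two actions are
# hypothetical; real policies should be scoped to the workload's needs.
def s3_read_policy(bucket):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",       # bucket-level (ListBucket)
                f"arn:aws:s3:::{bucket}/*",     # object-level (GetObject)
            ],
        }],
    }

policy_json = json.dumps(s3_read_policy("example-logs"), indent=2)
```

Generating documents this way keeps policies reviewable in code review and diffable in CI, which fits the IaC workflow the listing describes.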

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

kochi, kerala

On-site

As a highly skilled and experienced Full Stack Software Engineer, you will be responsible for designing, creating, testing, and documenting new and innovative software products. With a strong background in full stack development and expertise in technologies like Node.js, React.js, TypeScript, JavaScript, Cypress, MongoDB, Terraform, and AWS services, you will play a key role in the efficient development and deployment of solutions to meet business needs. You will take technical responsibility across all stages of software development, measuring and monitoring applications of project/team standards for software construction, including software security. Leading refinement activities of product features and designing software components using appropriate modelling techniques will be part of your responsibilities. You will work closely with the Tech Lead-Principal Architect to define code standards, provide feedback on code structure, and maintain technical documentation for solutions. Your role will involve troubleshooting and debugging production issues, contributing to all stages of the software development lifecycle, and building efficient, reusable, and reliable code using best practices. You will also collaborate closely with product managers, business analysts, and stakeholders, staying up-to-date with the latest trends and technologies in web development. Specific skill-based deliverables include designing, developing, and maintaining full stack applications, building efficient front-end systems with a focus on performance optimization, integrating user-facing elements using server-side rendering, developing user interfaces with modern technologies, creating scalable backend services, implementing automated testing platforms, and designing high-quality technical solutions using Terraform and AWS services. Qualifications & Skills: - Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 
- Minimum of 5 years of work experience as a Full Stack Software Engineer or similar role. - Hands-on experience in developing and deploying web applications using technologies like Node.js, React.js, TypeScript, JavaScript, Cypress, MongoDB, Terraform, and AWS. - In-depth understanding of AWS IAM best practices, security protocols, and services like API Gateway, Lambda, and S3. - Proficiency in software development principles, design patterns, RESTful web services, microservices architecture, and cloud platforms like AWS. - Strong knowledge of Terraform for infrastructure as code, version control systems, CI/CD pipelines, software security, database design, optimization, testing, and incident response. - Experience in working in an Agile environment is a plus. Abilities & Competencies: - Excellent communication and collaboration skills. - Strategic thinking and problem-solving abilities. - Ability to work effectively in a fast-paced environment and meet deadlines. - Proven leadership and management skills with a commitment to high-quality customer service. - Takes accountability and ownership with a results-oriented approach. - Ability to quickly resolve complex problems, work in an agile environment, and collaborate with distributed teams effectively.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

As an AWS Data Engineer at Quest Global, you will be responsible for designing, developing, and maintaining data pipelines while ensuring data quality and integrity within the MedTech industry. Your key responsibilities will include designing scalable data solutions on the AWS cloud platform, developing data pipelines using Databricks and PySpark, collaborating with cross-functional teams to understand data requirements, optimizing data workflows for improved performance, and ensuring data quality through validation and testing processes.

To be successful in this role, you should have a Bachelor's degree in Computer Science, Engineering, or a related field, along with at least 6 years of experience as a Data Engineer with expertise in AWS, Databricks, PySpark, and S3. You should possess a strong understanding of data architecture, data modeling, and data warehousing concepts, as well as experience with ETL processes, data integration, and data transformation. Excellent problem-solving skills and the ability to work in a fast-paced environment are also essential.

In terms of required skills and experience, you should have experience in implementing Cloud-based analytics solutions in Databricks (AWS) and S3, scripting experience in building data processing pipelines with PySpark, and knowledge of Data Platform and Cloud (AWS) ecosystems. Working experience with AWS Native services such as DynamoDB, Glue, MSK, S3, Athena, CloudWatch, Lambda, and IAM is important, as well as expertise in ETL development, analytics applications development, and data migration. Exposure to all stages of SDLC, strong SQL development skills, and proficiency in Python and PySpark development are also desired. Additionally, experience in writing unit test cases using PyTest or similar tools would be beneficial. If you are a talented AWS Data Engineer looking to make a significant impact in the MedTech industry, we invite you to apply for this exciting opportunity at Quest Global.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

As a Full Stack Developer at our company, you will be involved in working on complex engineering projects, platforms, and marketplaces for our clients by utilizing emerging technologies. You will always stay ahead of the technology curve and receive continuous training to become Polyglots. Your role will involve solving end customer challenges through designing and coding, making end-to-end contributions to technology-oriented development projects, providing solutions in Agile mode, and collaborating with Power Programmers and the Open Source community. You will have the opportunity to work on custom development of new platforms and solutions, large-scale digital platforms, and marketplaces. Additionally, you will be working on complex engineering projects using cloud-native architecture and collaborating with innovative Fortune 500 companies on cutting-edge technologies. Your responsibilities will include co-creating and developing new products and platforms for our clients, contributing to Open Source projects, and continuously upskilling in the latest technology areas. To excel in this role, you should have a minimum of 8 years of overall experience and possess the following skills: - Solid experience in developing serverless applications on the AWS platform. - In-depth knowledge of AWS services such as Lambda, API Gateway, DynamoDB, S3, IAM, CloudFormation, CloudWatch, etc. - Proficiency in React.Js, Node.js, JavaScript, and/or TypeScript, along with experience in AWS Serverless Application Model (SAM). - Experience with serverless deployment strategies, monitoring, and troubleshooting, as well as a solid understanding of RESTful APIs and best practices for designing scalable APIs. - Familiarity with version control systems (e.g., Git) and CI/CD pipelines. - Strong problem-solving and analytical skills with a focus on delivering high-quality code and solutions. 
- Excellent communication skills to effectively collaborate and communicate technical concepts to both technical and non-technical stakeholders. - Experience with Azure DevOps or similar CI/CD tools and an understanding of Agile/Scrum methodologies in an Agile development environment.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

indore, madhya pradesh

On-site

We are looking for a highly experienced Senior Full Stack Developer with expertise in Java (backend), Flutter (mobile/web), and advanced AWS Cloud services. The ideal candidate should have prior experience in the stock broking or finance domain. Responsibilities: - Design, develop, and maintain scalable financial/trading applications using Java and Flutter. - Architect and deploy solutions on AWS, including EC2, Load Balancer, and VPC, with automated scaling and high availability. - Implement telemetry monitoring solutions for real-time visibility and incident response. - Design and optimize data pipelines using Elasticsearch and RDBMS. - Integrate with external APIs securely and efficiently. - Collaborate with cross-functional teams to deliver high-performance financial systems. - Enforce best coding practices, CI/CD, and unit/integration testing. - Ensure robust security practices for financial transactions and customer data. Required Skills & Experience: - 6 years of full stack development experience in finance or stock broking. - Strong expertise in Java (Spring Boot, REST APIs, Multithreading, JMS). - Advanced Flutter development skills. - Deep hands-on AWS experience including EC2, Load Balancer, Elasticsearch, and telemetry monitoring. - Experience with CI/CD, DevOps, and database management. - Understanding of financial protocols, compliance, and security standards. - Proven experience with high-throughput, low-latency applications. - Strong debugging, optimization, and troubleshooting skills. Preferred Qualifications: - Exposure to event-driven architecture tools like Kafka/RabbitMQ. - Knowledge of microservices, serverless, and API Gateway on AWS. - Experience working with market data vendors. - Prior experience in building real-time dashboards and analytics in fintech. - AWS certification is a plus. 
Qualification: Graduate in IT/Computer Science
Package: As per Industry Standards
Location: Indore, MP

If you meet the above requirements and are passionate about developing cutting-edge financial applications, we encourage you to apply for this position at Indira Securities Pvt Ltd.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

kozhikode, kerala

On-site

As a Full Stack Engineer with expertise in MERN stack and AI integration, you will primarily be responsible for developing responsive web applications and seamlessly integrating AI pipelines such as LangChain, RAG, and Python APIs. You should have a minimum of 2 years of relevant experience in this field. Your key skills should include proficiency in React.js, Next.js, Node.js, FastAPI, MongoDB/PostgreSQL, and Tailwind CSS. Additionally, experience with AWS services like EC2, Lambda, GitHub Actions, Docker, JWT, CI/CD, and Python integration will be beneficial for this role. In this role, your main responsibilities will involve building customer apps and dashboards, integrating various APIs, handling billing and authentication systems, as well as incorporating AI models into the applications. Furthermore, you will be expected to automate deployment processes and effectively manage the AWS infrastructure to ensure smooth operations.
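The JWT requirement mentioned above boils down to HMAC-signed tokens. Below is a stdlib-only sketch of HS256 signing and verification; a real service would normally use a library such as PyJWT, and the secret and claims here are illustrative:

```python
import base64
import hashlib
import hmac
import json

def _b64(data: bytes) -> str:
    # JWT uses URL-safe base64 with padding stripped.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(claims: dict, secret: bytes) -> str:
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify(token: str, secret: bytes) -> dict:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64(hmac.new(secret, signing_input, hashlib.sha256).digest())
    # Constant-time comparison to avoid timing side channels.
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

Production code should additionally check registered claims such as `exp` and `aud`; this sketch covers only the signature mechanics.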

Posted 2 weeks ago

Apply

2.0 - 8.0 years

0 Lacs

karnataka

On-site

At PwC, as a member of the cybersecurity team, your main focus will be on protecting organizations from cyber threats using advanced technologies and strategies. You will be responsible for identifying vulnerabilities, developing secure systems, and offering proactive solutions to safeguard sensitive data. Specifically, in the security architecture role at PwC, you will concentrate on designing and implementing robust security frameworks to shield organizations from cyber threats. Your tasks will include developing strategies and solutions to ensure the security of sensitive data and maintain the integrity of systems and networks. In this dynamic environment, your curiosity will drive you to become a reliable and contributing team member. You must be adaptable to working with various clients and team members, each presenting unique challenges and scope. Every experience will serve as an opportunity for learning and personal growth. Taking ownership and consistently delivering high-quality work that adds value for clients and contributes to team success is expected of you. As you progress within the firm, you will establish a reputation for yourself, opening doors to further opportunities. To excel in this role, you should possess a learning mindset and take ownership of your own development. Valuing diverse perspectives, understanding the needs and feelings of others, and adopting habits that sustain high performance are essential. Additionally, actively listening, asking clarifying questions, and effectively expressing ideas are key communication skills. Seeking, reflecting on, acting on, and providing feedback are important practices to embrace. It is crucial to gather information from various sources, analyze facts, and identify patterns. Committing to understanding how the business operates and developing commercial awareness are also vital. 
Learning and applying professional and technical standards, upholding the firm's code of conduct and independence requirements are integral aspects of this role. With 2-4 years of experience, you should have a background in operations/managed services and possess expertise in multi-cloud infrastructure solutions such as Azure & AWS. Your responsibilities will involve deployment, maintenance, monitoring, and management tasks. Demonstrable experience in implementing and supporting large-scale IT infrastructure environments or large businesses is crucial. Strong technical knowledge in Microsoft, Network & Cloud technology, along with leadership and communication skills, will enable you to enhance service delivery. Effective communication with stakeholders and the ability to gather business requirements and scope relevant solutions are essential. Collaborating with team leaders, setting behavioral and performance standards, energizing your team, and adapting positively to change and uncertainty are key aspects of this role. Preferred technical competencies include a strong technical background in Palo Alto, Cloud Security, Cloud platforms, and NAC (Network Access Control). Knowledge of web application firewall (WAF) and experience in creating, deploying, maintaining, and troubleshooting WAF policies for web applications is highly valued. Understanding data flow technologies, mainstream operating systems, and various security technologies is essential. Additionally, experience with Palo Alto and Prisma Cloud Technologies, planning, configuring, and deploying PA Firewalls, troubleshooting Panorama, Palo Alto firewalls, and managing Prisma cloud solutions will be beneficial. Knowledge of VPN technologies, incident management, change management, and developing scalable network security solutions are key skills required for this role. 
Furthermore, experience with network security infrastructure, including Illumio Micro/Nano Segmentation, Forescout NAC, and Zscaler, is preferred. Understanding AWS Cloud Networking components and services, and expertise in AWS cloud environments, including VPC, virtual gateway, Route53, Direct Connect Gateway, transit VPC, transit gateway, Lambda, endpoints, load balancers, and SIEM dashboard configuration are advantageous. Knowledge of AWS WAF, AWS load balancers, common OWASP Top Ten Web Application and API vulnerabilities, and complex network architecture is also desirable. In summary, as a member of the cybersecurity team at PwC, you will play a vital role in protecting organizations from cyber threats by implementing robust security frameworks, developing strategies to safeguard sensitive data, and ensuring the integrity of systems and networks. Your passion for learning, adaptability, strong technical skills, and collaborative approach will contribute to the success of the team and add value to our clients' businesses.

Posted 2 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

jaipur, rajasthan

On-site

As an AI/ML Engineer (Python) at Telepathy Infotech, you will be responsible for building and deploying machine learning and GenAI applications in real-world scenarios. You will be part of a passionate team of technologists working on innovative digital solutions for clients across industries. We value continuous learning, ownership, and collaboration in our work culture. To excel in this role, you should have strong Python skills and experience with libraries like Pandas, NumPy, Scikit-learn, and TensorFlow/PyTorch. Experience in GenAI development using APIs such as Google Gemini, Hugging Face, Grok, etc. is highly desirable. A solid understanding of ML, DL, NLP, and LLM concepts is essential, along with hands-on experience in Docker, Kubernetes, and CI/CD pipeline creation. Familiarity with Streamlit, Flask, FastAPI, MySQL/PostgreSQL, AWS services (EC2, Lambda, RDS, S3, API Gateway), LangGraph, serverless architectures, and vector databases like FAISS and Pinecone will be advantageous. Proficiency in version control using Git is also required. Ideally, you should have a B.Tech/M.Tech/MCA degree in Computer Science, Data Science, AI, or a related field with 1-5 years of relevant experience or a strong project/internship background in AI/ML. Strong communication skills, problem-solving abilities, self-motivation, and a willingness to learn emerging technologies are key qualities we are looking for in candidates. Working at Telepathy Infotech will provide you with the opportunity to contribute to impactful AI/ML and GenAI solutions while collaborating in a tech-driven and agile work environment. You will have the chance to grow your career in one of India's fastest-growing tech companies with a transparent and supportive company culture. To apply for this position, please send your CV to hr@telepathyinfotech.com or contact us at +91-8890559306 for any queries. Join us on our journey of innovation and growth in the field of AI and ML at Telepathy Infotech.
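The vector-database retrieval step of a RAG pipeline (the FAISS/Pinecone experience mentioned above) reduces to nearest-neighbour search over embeddings. Here is a toy sketch using a linear scan and cosine similarity; the two-dimensional "embeddings" are invented, and production systems use approximate indexes rather than scanning every document:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec, docs, k=1):
    # Return the texts of the k documents whose vectors are closest to the
    # query vector; this is what a vector store does at retrieval time.
    scored = sorted(docs, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in scored[:k]]
```

In a real pipeline the query vector would come from an embedding model and the retrieved texts would be stuffed into the LLM prompt; the ranking logic is the same.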

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies