3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
Role Overview: As a Backend Developer at Capgemini, you will develop and maintain backend services using Node.js, design and implement RESTful APIs, and integrate them with front-end applications. You will use AWS services such as Lambda, API Gateway, DynamoDB, S3, EC2, and RDS, create and manage serverless workflows with Step Functions, and optimize applications for performance, scalability, and cost-efficiency. The role also covers error handling, logging, and monitoring with tools like Splunk, collaboration with front-end, DevOps, and product teams, participation in Agile ceremonies, and writing clean, well-documented code.

Key Responsibilities:
- Develop and maintain backend services using Node.js
- Design and implement RESTful APIs and integrate with front-end applications
- Utilize AWS services such as Lambda, API Gateway, DynamoDB, S3, EC2, and RDS
- Create and manage serverless workflows using Step Functions
- Optimize applications for performance, scalability, and cost-efficiency
- Implement error handling, logging, and monitoring using tools like Splunk
- Collaborate with front-end developers, DevOps, and product teams
- Participate in Agile ceremonies and contribute to sprint planning and retrospectives
- Write clean, efficient, and well-documented code
- Maintain technical documentation and participate in code reviews

Qualifications Required:
- 3+ years of experience with Node.js and JavaScript/TypeScript
- Strong proficiency in AWS services: Lambda, API Gateway, DynamoDB, S3, Step Functions, EventBridge, SQS
- Experience with Terraform or other Infrastructure-as-Code tools
- Solid understanding of serverless architecture and cloud-native design patterns
- Familiarity with CI/CD pipelines (e.g., GitHub Actions)
- Experience with version control systems like Git
- Strong problem-solving skills and attention to detail
Posted 23 hours ago
3.0 - 7.0 years
3 - 7 Lacs
Cochin, Kerala, India
On-site
We are seeking talented AWS AI Engineers / Developers to join our team in building innovative agentic AI solutions. This role focuses on implementing intelligent, automated workflows that enhance clinical data processing and decision-making, leveraging the power of AWS services and cutting-edge open-source AI frameworks. Experience in healthcare or Life Sciences, with a strong focus on regulatory compliance, is highly desirable.

Key Responsibilities:
- Design, develop, and implement agentic AI workflows for clinical source verification, discrepancy detection, and intelligent query generation to enhance data quality and reliability.
- Build and integrate LLM-powered agents using AWS Bedrock in combination with open-source frameworks like LangChain and AutoGen.
- Create robust, event-driven pipelines leveraging AWS Lambda, Step Functions, and EventBridge for seamless orchestration of data and processes.
- Optimize prompt engineering techniques, implement retrieval-augmented generation (RAG), and facilitate efficient multi-agent communication.
- Integrate AI agents securely with external applications and services through well-defined, secure APIs.
- Collaborate with Data Engineering teams to design and maintain PHI/PII-safe data ingestion pipelines, ensuring compliance with privacy regulations.
- Continuously monitor, test, and fine-tune AI workflows, focusing on improving accuracy, reducing latency, and ensuring compliance with industry standards.
- Document solutions and contribute to the establishment of best practices and governance frameworks.

Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related technical field.
- 3-6 years of hands-on experience in AI/ML engineering, with specific expertise in LLM-based or agentic AI development and deployment.
- Strong programming skills in Python and/or TypeScript.
- Practical experience with agentic AI frameworks such as LangChain, LlamaIndex, or AutoGen.
- Solid understanding of AWS AI services including Bedrock, SageMaker, Textract, and Comprehend Medical.
- Proven experience in API development and integration, as well as designing event-driven system architectures.
- Knowledge of healthcare or Life Sciences domains, including regulatory compliance requirements (HIPAA, GDPR, etc.), is preferred.
- Strong problem-solving mindset, with a focus on experimentation, iteration, and delivering innovative solutions rapidly.

Preferred Skills:
- Effective communication and collaboration skills, with the ability to work closely with cross-functional teams.
- Passion for emerging AI technologies and cloud innovations.
- Prior exposure to clinical or life sciences data workflows is a strong advantage.
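For illustration, a minimal Python sketch of the kind of Bedrock-backed agent call this posting describes. The region, model ID, and prompt framing are assumptions for the sketch, not details from the listing:

```python
import json
import boto3

# Hypothetical illustration: invoke a Claude model on Amazon Bedrock and
# parse its reply. Region and model ID are assumptions for the sketch.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def verify_clinical_field(source_value: str, entered_value: str) -> str:
    """Ask the model whether an entered value matches its source document."""
    prompt = (
        "You are a clinical data verification agent. The source document says: "
        f"'{source_value}'. The data entry says: '{entered_value}'. "
        "Reply DISCREPANCY or MATCH, with a one-line reason."
    )
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 200,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]
```

In a real pipeline a call like this would sit behind the Lambda/Step Functions orchestration the posting mentions, with the verdict routed to a query-generation step.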
Posted 1 day ago
3.0 - 7.0 years
4 - 7 Lacs
Cochin, Kerala, India
On-site
We are seeking skilled Machine Learning Engineers to design, develop, and deploy advanced ML models focused on Agentic AI use cases, utilizing the AWS ecosystem. The role emphasizes end-to-end ML solutions, from data preprocessing and model development to deployment and ongoing monitoring, with a strong focus on scalability, performance, and cost-efficiency in production environments.

Key Responsibilities:
- Design, develop, and deploy machine learning models tailored for Agentic AI applications in clinical and enterprise domains.
- Work extensively with the AWS AI/ML ecosystem, including SageMaker, Bedrock, Lambda, Step Functions, S3, DynamoDB, and Kinesis, to build scalable solutions.
- Perform data preprocessing and feature engineering on structured, unstructured, and streaming data to build high-quality training datasets.
- Collaborate with Data Engineering teams to ensure robust, well-curated datasets while maintaining PHI/PII safety and compliance.
- Implement fine-tuning of large language models (LLMs), embeddings, and retrieval-augmented generation (RAG) pipelines.
- Evaluate and optimize models focusing on accuracy, performance, scalability, and operational cost-effectiveness.
- Integrate ML models into production applications and expose them via APIs for seamless consumption.
- Work closely with MLOps teams to automate workflows for model training, testing, deployment, and continuous monitoring.
- Conduct experimentation, A/B testing, and rigorous model validation to guarantee performance and reliability.
- Document experiments, data pipelines, model architectures, and best practices to ensure reproducibility and knowledge sharing.

Required Skills & Qualifications:
- 3+ years of hands-on experience in machine learning engineering, model development, and production deployment.
- Strong proficiency in Python, with expertise in libraries such as NumPy, Pandas, Scikit-learn, PyTorch, and TensorFlow.
- Solid understanding of the full ML lifecycle, including data preprocessing, model training, evaluation, deployment, and monitoring.
- Experience working with AWS services for machine learning: SageMaker, Lambda, ECS/EKS, Step Functions, Bedrock, S3, and DynamoDB.
- Practical knowledge of large language models (LLMs), natural language processing (NLP), and vector embeddings.
- Proficient in developing APIs and deploying ML models as microservices in production environments.
- Experience in orchestrating ML pipelines using tools such as Airflow, Kubeflow, or MLflow.
- Familiarity with data versioning, experiment tracking, and model registry best practices.
- Strong SQL and NoSQL database skills, including experience with vector databases like Weaviate, Pinecone, or FAISS.

Preferred Attributes:
- Experience in healthcare or regulated domains, ensuring compliance with industry standards.
- Excellent problem-solving skills, with a passion for experimentation, iteration, and delivering innovative AI solutions.
- Strong communication skills, capable of collaborating with cross-functional teams.
- Self-driven, with a focus on automating and improving ML workflows and processes.
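As a flavor of the vector-embedding and RAG work this role lists, a small self-contained sketch of the retrieval step — ranking stored document embeddings against a query embedding. The embedding model itself is out of scope and assumed:

```python
import numpy as np

# Minimal sketch of the retrieval step in a RAG pipeline: rank stored
# document embeddings by cosine similarity to a query embedding.
# The vectors are assumed to come from whatever embedding model is in use.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_k(query_vec: np.ndarray, doc_vecs: list, k: int = 3) -> list:
    """Return indices of the k most similar document vectors."""
    scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
```

A vector database like FAISS, Weaviate, or Pinecone replaces this brute-force scan at scale, but the ranking idea is the same.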
Posted 1 day ago
3.0 - 7.0 years
4 - 7 Lacs
Cochin, Kerala, India
On-site
Job Overview: We are looking for a QA Engineer to ensure the quality and reliability of AI-powered applications running on AWS. The role involves testing data pipelines, AI/ML workflows, and cloud-native applications, while also developing automated test frameworks and supporting continuous integration.

Key Responsibilities:
- Develop and execute test plans, test cases, and automation scripts for AI applications on AWS.
- Test data pipelines (ETL/ELT, streaming, batch) to ensure data is correct, complete, and performs well.
- Validate AI/ML workflows including model training, inference, fine-tuning, and retraining processes.
- Perform functional, regression, integration, performance, and security testing of cloud applications.
- Build automated test frameworks for APIs, microservices, and AWS Lambda functions.
- Integrate testing into CI/CD pipelines to enable continuous testing.
- Work closely with data engineers, ML engineers, and architects to ensure overall system quality.
- Monitor production systems and check that results meet expected business or AI outcomes.
- Document test results, report defects, and suggest process improvements.

Required Skills & Qualifications:
- 3+ years of QA or software testing experience.
- Experience in both manual and automated testing practices.
- Good programming skills in Python, Java, or JavaScript for automation purposes.
- Experience testing cloud-native applications using AWS services like S3, Lambda, Step Functions, API Gateway, ECS, and DynamoDB.
- Familiarity with CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or AWS CodePipeline.
- Knowledge of data validation and quality frameworks (e.g., Great Expectations, Deequ).
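A hedged sketch of the kind of automated Lambda test this role builds, in pytest style. The handler module, event shape, and response contract are hypothetical placeholders:

```python
# Sketch of an automated test for a Lambda-backed API. The module under
# test and its event/response contract are assumptions for illustration.
import json

from app.handler import lambda_handler  # hypothetical module under test

def test_get_record_returns_200():
    event = {"httpMethod": "GET", "pathParameters": {"id": "42"}}
    response = lambda_handler(event, context=None)
    assert response["statusCode"] == 200
    body = json.loads(response["body"])
    assert body["id"] == "42"

def test_missing_id_returns_400():
    event = {"httpMethod": "GET", "pathParameters": None}
    response = lambda_handler(event, context=None)
    assert response["statusCode"] == 400
```

Tests like these run locally and in CI without AWS access because the handler is invoked directly with a synthetic event.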
Posted 1 day ago
2.0 - 7.0 years
9 - 15 Lacs
Bengaluru
Hybrid
Urgent Hiring in Genpact for Senior Data Engineer
Location: Bangalore, Genpact office, Prestige Park
Shift Timings: 12 PM to 10 PM IST
Work Mode: Hybrid, permanent role
Looking for candidates with immediate to 30 days' notice period.

Title: Senior Data Engineer - AWS | ETL | SQL | Python

Job Description: We are looking for a Senior Data Engineer (2+ years) with strong expertise in AWS cloud technologies, ETL pipelines, SQL optimization, and Python scripting. The role involves building and maintaining scalable data pipelines while leveraging AWS services to enable seamless data transformation and accessibility.

Key Responsibilities:
- Build and optimize ETL pipelines using AWS Step Functions & AWS Lambda.
- Develop scalable data workflows leveraging AWS tools and SQL.
- Write efficient SQL queries for large-scale datasets.
- Collaborate with cross-functional teams to meet business data needs.
- Integrate APIs to enrich and expand datasets.
- Deploy and manage containerized applications (Docker/Kubernetes).

Mandatory Skills:
- 2+ years of Data Engineering experience.
- Hands-on expertise in AWS, ETL, SQL, Python.
- Strong working knowledge of AWS Step Functions & AWS Lambda.

Preferred Skills: AWS Glue, Git/CI-CD, REST/SOAP APIs, Docker/Kubernetes, PySpark, Snowflake, Linux.
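As a rough illustration of the Step Functions plus Lambda orchestration this role centers on, a Lambda might start one state-machine execution per incoming S3 object. The state machine ARN and payload shape are assumptions, not part of the posting:

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Hypothetical sketch: trigger an ETL state machine for each new S3 object.
# The ARN below is a placeholder for illustration only.
STATE_MACHINE_ARN = "arn:aws:states:ap-south-1:123456789012:stateMachine:etl"

def lambda_handler(event, context):
    for record in event.get("Records", []):
        payload = {
            "bucket": record["s3"]["bucket"]["name"],
            "key": record["s3"]["object"]["key"],
        }
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps(payload),
        )
    return {"statusCode": 200}
```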
Posted 2 days ago
8.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
KONE Technology and Innovation Unit (KTI) is where the magic happens at KONE. It's where we combine the physical world - escalators and elevators - with smart and connected digital systems. We are changing and improving the way billions of people move within buildings every day. We are on a mission to expand and develop new digital solutions that are based on emerging technologies. KONE's vision is to create the Best People Flow experience by providing ease, effectiveness and experiences to our customers and users. In line with our strategy, Sustainable Success with Customers, we will focus on increasing the value we create for customers with new intelligent solutions and embed sustainability even deeper across all of our operations. Through closer collaboration with customers and partners, KONE will increase the speed of bringing new services and solutions to the market. The R&D unit in KTI is responsible for developing digital services at KONE and is the development engine for our Digital Services.

We are looking for a Cloud Automation Architect with strong expertise in automation on AWS cloud, UI, API, Data, and ML Ops. The ideal candidate will bring hands-on technical leadership, architect scalable automation solutions, and drive end-to-end solution design for enterprise-grade use cases. You will collaborate with cross-functional teams including developers, DevOps engineers, product owners, and business stakeholders to deliver automation-first solutions.

Role description:

Solution Architecture & Design
- Architect and design automation solutions leveraging Cloud services and data management
- Define E2E architecture spanning cloud infrastructure, APIs, UI, and visualization layers
- Translate business needs into scalable, secure, and cost-effective technical solutions.

Automation on Cloud
- Lead automation initiatives across infrastructure, application workflows, and data pipelines
- Implement operations use cases using Data Visualization and Cloud Automation
- Optimize automation for cost, performance, and security

UI & API Integration
- Design and oversee development of APIs and microservices to support automation
- Guide teams on UI frameworks (React/Angular) for building dashboards and portals
- Ensure seamless integration between APIs, front-end applications, OCR, and cloud services

Data & ML Ops
- Define architecture for data ingestion, transformation, and visualization on AWS.
- Work with tools like Amazon QuickSight and Power BI to enable business insights.
- Establish ML Ops best practices for data-driven decision-making.
- Architect and implement end-to-end MLOps pipelines for training, deployment, and monitoring ML models.
- Use AWS services like SageMaker, Step Functions, Lambda, Kinesis, Glue, S3, and Redshift for ML workflows.
- Establish best practices for model versioning, reproducibility, CI/CD for ML, and monitoring model drift.
Team Leading and Collaboration
- Mentor engineering teams on cloud-native automation practices
- Collaborate with product owners to prioritize and align technical solutions with business outcomes
- Drive POCs and innovation initiatives for automation at scale

Requirements:
- 8-10 years of experience in cloud architecture, automation, and solution design
- Deep expertise in Python for automation use cases and understanding of ML Ops
- Experience with data engineering & visualization tools
- Knowledge of UI frameworks (React, Angular, Vue) for portals and dashboards
- Expertise in AWS Cloud services for compute, data, and ML workloads
- Strong understanding of security, IAM, compliance, and networking in AWS
- Hands-on experience with MLOps pipelines (model training, deployment, monitoring).
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Software Engineer III at JPMorgan Chase within the Consumer and Community Banking - Consumer Card Technology, you will play a vital role in an agile team, contributing to the design and delivery of innovative technology products. Your work will focus on developing critical technology solutions that align with the firm's business goals in a secure, stable, and scalable manner.

**Key Responsibilities:**
- Execute software solutions, design, development, and technical troubleshooting with a focus on innovative problem-solving approaches.
- Create secure and high-quality production code, ensuring synchronous operation with relevant systems.
- Produce architecture and design artifacts for complex applications while ensuring adherence to design constraints.
- Analyze and synthesize data sets to develop visualizations and reporting for continuous improvement of software applications and systems.
- Identify hidden problems and patterns in data to drive enhancements in coding hygiene and system architecture.
- Contribute to software engineering communities of practice and participate in events exploring new technologies.

**Qualification Required:**
- Formal training or certification in software engineering concepts with a minimum of 3 years of practical experience.
- Hands-on experience in system design, application development, testing, and operational stability.
- Proficiency in modern programming languages, database querying languages, and experience in a large corporate environment.
- Ability to analyze complex systems, conduct failure analysis, and develop technical engineering documentation.
- Expertise in DevOps processes, service-oriented architecture, web services/APIs, and modern software languages.
- Experience in Site Reliability Engineering (SRE) practices and proficiency in CI/CD tools.
- Strong knowledge of Python and integrating with Python-based applications.
- Understanding and hands-on experience with various AWS services for performance and cost optimization.

The company is looking for individuals who have a strong understanding of modern front-end technologies, exposure to cloud technologies, and knowledge of Analytical App development. Additionally, experience as a technical lead/mentor, strong communication skills, and the ability to work independently on design and functionality challenges are highly valued.
Posted 4 days ago
8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
Role Overview: As a Software Engineering Lead at CGI, you will play a crucial role in the Data and Analytics organization by actively participating in the development of initiatives that align with CGI's strategic goals. Your primary responsibility will involve understanding business logic and engineering solutions to support next-generation reporting and analytical capabilities on an enterprise-wide scale. Working in an agile environment, you will collaborate with your team to deliver user-oriented products for internal and external stakeholders.

Key Responsibilities:
- Be accountable for the delivery of business functionality.
- Work on AWS cloud for migrating/re-engineering data and applications.
- Engineer solutions adhering to enterprise standards and technologies.
- Provide technical expertise through hands-on development of solutions for automated testing.
- Conduct peer code reviews, merge requests, and production releases.
- Implement design and functionality using Agile principles.
- Demonstrate a track record of quality software development and innovation.
- Collaborate effectively in a high-performing team environment.
- Maintain a quality mindset to ensure data quality and monitor for potential issues.
- Be entrepreneurial, ask smart questions, and champion new ideas.
- Take ownership and accountability for your work.

Qualifications Required:
- 8-11 years of experience in application program development.
- Bachelor's degree in Engineering or Computer Science.
- Proficiency in Python, Databricks, Teradata, SQL, UNIX, ETL, data structures, Looker, Tableau, Git, Jenkins, and RESTful & GraphQL APIs.
- Experience with AWS services such as Glue, EMR, Lambda, Step Functions, CloudTrail, CloudWatch, SNS, SQS, S3, VPC, EC2, RDS, and IAM.

Additional Company Details: At CGI, life is rooted in ownership, teamwork, respect, and belonging. As a CGI Partner, you will have the opportunity to turn meaningful insights into action from day one. You will contribute to innovative solutions, build relationships, and access global capabilities while shaping your career in a supportive environment that prioritizes your growth and well-being. Join CGI's team, one of the largest IT and business consulting firms globally, to make a difference in the world of technology and consulting.
Posted 4 days ago
5.0 - 9.0 years
0 Lacs
Kochi, Kerala
On-site
We are seeking a Senior Data Engineer to lead the development of a scalable data ingestion framework. You will play a crucial role in ensuring high data quality and validation, and in designing and implementing robust APIs for seamless data integration. Your expertise in building and managing big data pipelines using modern AWS-based technologies will keep quality and efficiency at the forefront of our data processing systems.

Key Responsibilities:
- **Data Ingestion Framework**:
  - Design & Development: Architect, develop, and maintain an end-to-end data ingestion framework that efficiently extracts, transforms, and loads data from diverse sources.
  - Framework Optimization: Use AWS services such as AWS Glue, Lambda, EMR, ECS, EC2, and Step Functions to build highly scalable, resilient, and automated data pipelines.
- **Data Quality & Validation**:
  - Validation Processes: Develop and implement automated data quality checks, validation routines, and error-handling mechanisms to ensure the accuracy and integrity of incoming data.
  - Monitoring & Reporting: Establish comprehensive monitoring, logging, and alerting systems to proactively identify and resolve data quality issues.
- **API Development**:
  - Design & Implementation: Architect and develop secure, high-performance APIs to enable seamless integration of data services with external applications and internal systems.
  - Documentation & Best Practices: Create thorough API documentation and establish standards for API security, versioning, and performance optimization.
- **Collaboration & Agile Practices**:
  - Cross-Functional Communication: Work closely with business stakeholders, data scientists, and operations teams to understand requirements and translate them into technical solutions.
  - Agile Development: Participate in sprint planning, code reviews, and agile ceremonies, and contribute to continuous improvement initiatives and CI/CD pipeline development.

Required Qualifications:
- **Experience & Technical Skills**:
  - Professional Background: At least 5 years of relevant experience in data engineering with a strong emphasis on analytical platform development.
  - Programming Skills: Proficiency in Python and/or PySpark and SQL for developing ETL processes and handling large-scale data manipulation.
  - AWS Expertise: Extensive experience using AWS services including AWS Glue, Lambda, Step Functions, and S3 to build and manage data ingestion frameworks.
  - Data Platforms: Familiarity with big data systems (e.g., AWS EMR, Apache Spark, Apache Iceberg) and databases like DynamoDB, Aurora, Postgres, or Redshift.
  - API Development: Proven experience in designing and implementing RESTful APIs and integrating them with external and internal systems.
  - CI/CD & Agile: Hands-on experience with CI/CD pipelines (preferably with GitLab) and Agile development methodologies.
- **Soft Skills**:
  - Strong problem-solving abilities and attention to detail.
  - Excellent communication and interpersonal skills with the ability to work independently and collaboratively.
  - Capacity to quickly learn and adapt to new technologies and evolving business requirements.

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Experience with additional AWS services such as Kinesis, Firehose, and SQS.
- Familiarity with data lakehouse architectures and modern data quality frameworks.
- Prior experience in a role that required proactive data quality management and API-driven integrations in complex, multi-cluster environments.

Please note that the job is based in Kochi and Thiruvananthapuram, and only local candidates are eligible to apply. This is a full-time position that requires in-person work.

Experience Required:
- AWS: 7 years
- Python: 7 years
- PySpark: 7 years
- ETL: 7 years
- CI/CD: 7 years

Location: Kochi, Kerala
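As a hedged sketch of the automated data quality checks this posting emphasizes, a small pandas validation gate. The column names and rules are invented for illustration; real rules would come from the business:

```python
import pandas as pd

# Illustrative data quality gate for an ingestion pipeline. Column names
# and thresholds are hypothetical placeholders.
def validate_batch(df: pd.DataFrame) -> list:
    errors = []
    if df["record_id"].isnull().any():
        errors.append("record_id contains nulls")
    if df["record_id"].duplicated().any():
        errors.append("record_id contains duplicates")
    if (df["amount"] < 0).any():
        errors.append("amount contains negative values")
    return errors

def run_quality_gate(df: pd.DataFrame) -> None:
    errors = validate_batch(df)
    if errors:
        # In a real pipeline this would log to CloudWatch and raise an alert.
        raise ValueError("Data quality check failed: " + "; ".join(errors))
```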
Posted 5 days ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
You're ready to gain the skills and experience needed to grow within your role and advance your career, and we have the perfect software engineering opportunity for you. As a Software Engineer II at JPMorgan Chase within Consumer and Community Banking, you are part of an agile team that works to enhance, design, and deliver the software components of the firm's state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.

- Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
- Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
- Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
- Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
- Contributes to software engineering communities of practice and events that explore new and emerging technologies

- Formal training or certification on software engineering concepts and 2+ years applied experience
- Hands-on practical experience in system design, application development, testing, and operational stability
- Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
- Ability to analyze complex systems, conduct failure analysis/root cause analysis, and develop and maintain technical engineering documentation, with experience in highly scalable systems, release management, software configuration, design, development, and implementation
- Expertise in DevOps processes within a Cloud/SaaS environment (AWS), service-oriented architecture, web services/API, and modern software languages, with familiarity in Agile and lean methodologies
- Experience in Site Reliability Engineering (SRE) practices, including monitoring, incident response, and system reliability, with proficiency in CI/CD tools such as Jenkins, GitHub, Terraform, Container Registry, etc.
- Hands-on experience and strong knowledge in Python and integrating with Python-based applications
- Strong understanding and hands-on experience with AWS Lambda, Glue, IAM, KMS, API Gateways, SNS, SQS, Step Functions, EventBridge, EC2, ECS, and Load Balancers, plus skills in performance and cost optimization of AWS services
- Knowledge of Analytical App development: React / JavaScript / CSS
- Banking domain expertise is a nice-to-have
- Demonstrated knowledge of software applications and technical processes within a technical discipline, e.g. cloud services
- Must have experience working in a team, and the ability to tackle design and functionality problems independently with little to no oversight
- Strong written and oral communication skills; ability to communicate effectively with all levels of management and partners from a variety of business functions
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

**Role Overview:**
- Develop and maintain backend services using Node.js.
- Design and implement RESTful APIs and integrate with front-end applications.
- Utilize AWS services such as Lambda, API Gateway, DynamoDB, S3, EC2, and RDS.
- Create and manage serverless workflows using Step Functions.
- Optimize applications for performance, scalability, and cost-efficiency.
- Implement error handling, logging, and monitoring using tools like Splunk.
- Collaborate with front-end developers, DevOps, and product teams.
- Participate in Agile ceremonies and contribute to sprint planning and retrospectives.
- Write clean, efficient, and well-documented code.
- Maintain technical documentation and participate in code reviews.

**Qualification Required:**
- 3+ years of experience with Node.js and JavaScript/TypeScript.
- Strong proficiency in AWS services: Lambda, API Gateway, DynamoDB, S3, Step Functions, EventBridge, SQS.
- Experience with Terraform or other Infrastructure-as-Code tools.
- Solid understanding of serverless architecture and cloud-native design patterns.
- Familiarity with CI/CD pipelines (e.g., GitHub Actions).
- Experience with version control systems like Git.
- Strong problem-solving skills and attention to detail.

**Additional Company Details:**
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. With a strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem. Capgemini is a responsible and diverse group of 340,000 team members in more than 50 countries.
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
You have an exciting opportunity as a Software Engineer III at JPMorgan Chase within the Consumer and Community Banking - Consumer Card Technology. You will be a key member of an agile team, designing and delivering innovative technology products. This role offers the chance to work on critical technology solutions that support the firm's business objectives in a secure, stable, and scalable manner.

**Key Responsibilities:**
- Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine approaches
- Creates secure and high-quality production code and maintains algorithms
- Produces architecture and design artifacts for complex applications
- Gathers, analyzes, synthesizes, and develops visualizations and reporting from large data sets
- Identifies hidden problems and patterns in data to drive improvements
- Contributes to software engineering communities of practice and events exploring new technologies

**Qualifications Required:**
- Formal training or certification on software engineering concepts with 3+ years applied experience
- Hands-on practical experience in system design, application development, testing, and operational stability
- Experience in developing, debugging, and maintaining code with modern programming languages
- Ability to analyze complex systems, conduct failure analysis, and develop technical engineering documentation
- Expertise in DevOps processes, service-oriented architecture, web services/API, and modern software languages
- Experience in Site Reliability Engineering (SRE) practices and proficiency in CI/CD tools
- Hands-on experience and strong knowledge in Python and integrating with Python-based applications
- Strong understanding and hands-on experience of various AWS services

The company is looking for someone who has familiarity with modern front-end technologies, exposure to cloud technologies, knowledge in Analytical App development, experience as a technical lead/mentor, and strong communication skills to communicate effectively with all levels of management and partners.
Posted 6 days ago
6.0 - 10.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a highly skilled and visionary Technical Lead with 6-8 years of working experience on AWS & GenAI, you will spearhead next-generation AI initiatives. Your role requires deep expertise in Large Language Model (LLM) integration, cloud-native architecture, and hands-on leadership in delivering scalable GenAI solutions on cloud platforms. You will lead multiple GenAI projects, driving innovation from concept through deployment and ensuring robust, scalable, and efficient AI-powered applications.

**Key Responsibilities:**
- Lead the design, development, and deployment of GenAI projects leveraging Large Language Models (LLMs) on cloud platforms, primarily AWS.
- Architect and implement scalable, event-driven cloud-native solutions using AWS Lambda, SNS/SQS, Step Functions, and related services.
- Drive prompt engineering strategies including prompt versioning, optimization, and chain-of-thought design to enhance model performance.
- Oversee fine-tuning, embeddings, and conversational flow development to build intelligent, context-aware AI applications.
- Integrate and manage LLM APIs such as Azure OpenAI and other emerging models to deliver cutting-edge AI capabilities.
- Utilize vector databases (ElasticSearch, OpenSearch, or similar) for embedding-based search and Retrieval-Augmented Generation (RAG).
- Lead model evaluation efforts including output analysis, performance tuning, and quality benchmarking to ensure high standards.
- Collaborate with cross-functional teams including data scientists, engineers, and product managers to align AI solutions with business goals.
- Mentor and guide technical teams on best practices in GenAI development and cloud architecture.
- Stay abreast of the latest advancements in AI, cloud technologies, and LLM frameworks to continuously innovate and improve solutions.

**Skills & Experience:**
- Proven hands-on experience integrating Large Language Models (LLMs) into real-world applications.
- Strong programming skills in Python and experience with web frameworks such as FastAPI or Flask.
- Expertise in cloud architecture, particularly AWS services including Lambda, SNS/SQS, Step Functions, and event-driven design patterns.
- Deep knowledge of LLM APIs, with mandatory experience in Azure OpenAI; familiarity with other models like LLaMA is a plus.
- Experience working with vector databases such as ElasticSearch, OpenSearch, or other vector search engines.
- Proficiency in prompt engineering techniques including prompt versioning, optimization, and chain-of-thought design.
- Familiarity with LLM frameworks such as LangChain and LlamaIndex.
- Experience with embedding-based search and Retrieval-Augmented Generation (RAG) methodologies.
- Strong skills in model evaluation, including output analysis, performance tuning, and quality benchmarking.
- Excellent leadership, communication, and project management skills.
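A minimal sketch of the FastAPI-plus-Azure-OpenAI integration pattern this posting combines. The endpoint, API version, and deployment name are placeholders, not details from the listing:

```python
import os
from fastapi import FastAPI
from openai import AzureOpenAI
from pydantic import BaseModel

app = FastAPI()

# Hypothetical wiring: endpoint, API version, and deployment name are
# placeholders; real values would come from the project's configuration.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

class Query(BaseModel):
    question: str

@app.post("/ask")
def ask(query: Query) -> dict:
    """Forward a user question to the LLM and return the completion."""
    completion = client.chat.completions.create(
        model="gpt-4o-deployment",  # assumed Azure deployment name
        messages=[{"role": "user", "content": query.question}],
    )
    return {"answer": completion.choices[0].message.content}
```

In a fuller RAG setup, a vector-store lookup would run before the chat call and its hits would be folded into the prompt.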
Posted 6 days ago
9.0 - 14.0 years
20 - 30 Lacs
Bengaluru
Hybrid
My profile: linkedin.com/in/yashsharma1608
Hiring manager profile: on payroll of https://www.nyxtech.in/
Client: Brillio (payroll)

AWS Architect
Primary skills: AWS (Redshift, Glue, Lambda, ETL and Aurora), advanced SQL and Python, PySpark. Note: Aurora database is a mandatory skill.
Experience: 9+ yrs
Notice period: immediate joiner
Location: any Brillio location (preferred: Bangalore)
Budget: 30 LPA

Job Description:
- 9+ years of IT experience with deep expertise in S3, Redshift, Aurora, Glue, and Lambda services.
- At least one instance of proven experience in developing a data platform end to end using AWS.
- Hands-on programming experience with DataFrames, Python, and unit testing the Python as well as Glue code.
- Experience in orchestrating mechanisms like Airflow, Step Functions, etc.
- Experience working on AWS Redshift is mandatory.
- Must have experience writing stored procedures, understanding of the Redshift Data API, and writing federated queries.
- Experience in Redshift performance tuning.
- Good communication and problem-solving skills; very good stakeholder communication and management.
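A small sketch of the Redshift Data API usage the JD calls out, via boto3. The cluster, database, and secret identifiers are placeholders:

```python
import time
import boto3

# Illustrative use of the Redshift Data API. Identifiers are placeholders.
client = boto3.client("redshift-data")

def run_query(sql: str) -> list:
    resp = client.execute_statement(
        ClusterIdentifier="analytics-cluster",                       # assumed
        Database="dev",                                              # assumed
        SecretArn="arn:aws:secretsmanager:region:acct:secret:name",  # assumed
        Sql=sql,
    )
    statement_id = resp["Id"]
    # Poll until the statement finishes (simplified; no backoff or timeout).
    while True:
        status = client.describe_statement(Id=statement_id)
        if status["Status"] in ("FINISHED", "FAILED", "ABORTED"):
            break
        time.sleep(1)
    if status["Status"] != "FINISHED":
        raise RuntimeError(status.get("Error", "query failed"))
    return client.get_statement_result(Id=statement_id)["Records"]
```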
Posted 6 days ago
5.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description:
- Software Development (5+ years): Proficient in Python and/or Node.js; strong REST API design and implementation skills.
- AWS Serverless: Deep experience with AWS Lambda, Step Functions, API Gateway (or Lambda Function URLs), DynamoDB/S3, and IAM.
- Container & Kubernetes: Hands-on with Docker and EKS (or ECS); comfortable with Helm, K8s manifests, and cluster autoscaling.
- MCP or Agent Frameworks: Familiarity with the Model Context Protocol spec or similar LLM tool-invocation patterns; experience with Bedrock Agents or LangChain is a plus.
- Infrastructure as Code: Expert in Terraform, AWS CDK, or CloudFormation for defining and deploying AWS infrastructure.
- Security Best Practices: Strong understanding of VPC networking, PrivateLink, security groups, WAF, Secrets Manager, and implementing least-privilege IAM policies.
- Monitoring & Observability: Experience with CloudWatch (logs, dashboards, alarms), Kibana, or open-source tooling (Prometheus, Grafana).
- CI/CD & DevOps: Proficient in setting up pipelines (GitHub Actions, Azure DevOps, or AWS CodePipeline) for automated testing and deployments.
- Prompt Engineering Collaboration: Ability to translate AI/ML requirements into clear tool definitions and assist in iterative prompt tuning.

Primary Skills: Python Development, Gen AI, AWS
Secondary Skills: AWS AI services (OpenSearch, SageMaker, Kendra), regulatory frameworks (GDPR, HIPAA), service mesh (AWS App Mesh, Istio)
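As a loose, hypothetical sketch of the LLM tool-invocation pattern named above (an invented simplification, not the actual Model Context Protocol specification), a Lambda might register named tools and dispatch calls to them:

```python
import json

# Hypothetical Lambda exposing "tools" to an LLM agent. The event contract
# (tool name + arguments) is an invented simplification for illustration.
TOOLS = {}

def tool(name):
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("get_order_status")
def get_order_status(order_id: str) -> dict:
    # Placeholder lookup; a real handler might query DynamoDB here.
    return {"order_id": order_id, "status": "SHIPPED"}

def lambda_handler(event, context):
    body = json.loads(event.get("body", "{}"))
    fn = TOOLS.get(body.get("tool"))
    if fn is None:
        return {"statusCode": 400, "body": json.dumps({"error": "unknown tool"})}
    result = fn(**body.get("arguments", {}))
    return {"statusCode": 200, "body": json.dumps(result)}
```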
Posted 6 days ago
5.0 - 8.0 years
15 - 22 Lacs
Gurugram
Remote
Role Characteristics: The Analytics team provides analytical support to multiple stakeholders (Product, Engineering, Business Development, Ad Operations) by developing scalable analytical solutions, identifying problems, coming up with KPIs and monitoring those to measure the impact/success of product improvements/changes, and streamlining processes. This is an exciting and challenging role that will enable you to work with large data sets, expose you to cutting-edge analytical techniques, work with the latest AWS analytics infrastructure (Redshift, S3, Athena), and gain experience in the usage of location data to drive businesses. Working in a dynamic startup environment will give you significant opportunities for growth within the organization. A successful applicant will be passionate about technology and developing a deep understanding of human behavior in the real world. They would also have excellent communication skills, be able to synthesize and present complex information, and be a fast learner.

You Will:
- Perform root cause analysis with minimum guidance to figure out reasons for sudden changes/abnormalities in metrics
- Understand the objective/business context of various tasks and seek clarity by collaborating with different stakeholders (like Product, Engineering)
- Derive insights and put them together to build a story to solve a given problem
- Suggest ways for process improvements in terms of script optimization and automating repetitive tasks
- Create and automate reports and dashboards through Python to track certain metrics basis given requirements

Technical Skills (Must have):
- B.Tech degree in Computer Science, Statistics, Mathematics, Economics or related fields
- 4-6 years of relevant experience in working with data and conducting statistical and/or numerical analysis
- Ability to write SQL code
- Scripting/automation using Python
- Hands-on experience in a data visualisation tool like Looker/Tableau/Quicksight
- Basic to advanced level understanding of statistics

Other Skills (Must have):
- Willingness and ability to quickly learn about new businesses, database technologies and analysis techniques
- Strong oral and written communication
- Understanding of patterns/trends and the ability to draw insights from those

Preferred Qualifications (Nice to have):
- Experience working with large datasets
- Experience with AWS analytics infrastructure (Redshift, S3, Athena, Boto3)
- Hands-on experience with AWS services like Lambda, Step Functions, Glue, EMR, plus exposure to PySpark

What we offer: At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
- Parental leave (maternity and paternity)
- Flexible time off (earned leaves, sick leaves, birthday leave, bereavement leave & company holidays)
- In-office daily catered lunch and fully stocked snacks/beverages
- Health cover for any hospitalization, covering both nuclear family and parents
- Tele-med for free doctor consultation, discounts on health checkups and medicines
- Wellness/gym reimbursement and pet expense reimbursement
- Childcare expenses and reimbursements, plus creche reimbursement
- Employee assistance program and employee referral program
- Education reimbursement program and skill development program
- Cell phone reimbursement (mobile subsidy program) and internet reimbursement
- Birthday treat reimbursement
- Employee Provident Fund Scheme offering different tax saving options such as VPF and employee and employer contribution up to 12% of basic
- Co-working space reimbursement, NPS employer match, meal card for tax benefit, and special benefits on salary account

We are an equal opportunity employer and value diversity, inclusion and equity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
We are looking for a talented Big Data Engineer with over 3 years of experience in developing, enhancing, and managing large-scale data applications on AWS Cloud. The ideal candidate should possess strong skills in Java, Scala, Python, and various Big Data technologies like Hadoop, Spark, and Cassandra. Proficiency in AWS services including EMR, EC2, S3, Lambda, and Step Functions is a must. Knowledge of REST APIs, SOAP web services, and Linux scripting is also necessary for this role.

As a Big Data Engineer, your responsibilities will include designing and implementing scalable data pipelines, optimizing system performance, diagnosing defects through root cause analysis, and ensuring compliance with coding standards and best practices. Effective communication skills and the ability to collaborate with diverse teams are essential. We are looking for someone who is proactive in learning new technologies and applying them effectively in the workplace.

Key Skills:
- Cloud data infrastructure
- Data processing
- Communication

If you are passionate about working with cutting-edge technologies and driving innovation in the field of Big Data engineering, we encourage you to apply for this exciting opportunity.
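A hedged sketch of the sort of Spark batch job such a role maintains on EMR — read raw events from S3, aggregate, write back. Bucket paths and the schema are invented for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Illustrative EMR batch job. Bucket paths and columns are placeholders.
spark = SparkSession.builder.appName("daily-events-rollup").getOrCreate()

events = spark.read.json("s3://example-raw-bucket/events/dt=2024-01-01/")

daily_counts = (
    events
    .filter(F.col("event_type").isNotNull())
    .groupBy("event_type")
    .agg(F.count("*").alias("event_count"))
)

daily_counts.write.mode("overwrite").parquet(
    "s3://example-curated-bucket/rollups/dt=2024-01-01/"
)
```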
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
Bangalore, Karnataka
On-site
As a Principal Data Engineer at FTSE Russell, you will lead the development of foundational components for a lakehouse architecture on AWS and drive the migration of existing data processing workflows to the new lakehouse solution. Collaboration across the Data Engineering organization is essential as you design and implement scalable data infrastructure and processes using technologies such as Python, PySpark, EMR Serverless, Iceberg, Glue, and the Glue Data Catalog.

Success in this role hinges on deep technical expertise, exceptional problem-solving skills, and the ability to lead and mentor within an agile team. You will lead complex projects autonomously, setting a high standard for technical contributions while fostering an inclusive and open culture within development teams. Your strategic guidance on best practices in design, development, and implementation will ensure that solutions meet business requirements and technical standards.

In addition to project leadership and culture building, you will be responsible for data development and tool advancement: writing high-quality, efficient code, developing necessary tools and applications, and leading the development of innovative tools and frameworks to enhance data engineering capabilities. You will also lead solution decomposition and design, working closely with architects, Product Owners, and Dev team members to decompose solutions into Epics while establishing and enforcing best practices for coding standards, design patterns, and system architecture. Stakeholder relationship building and communication are crucial aspects of this role, as you build and maintain strong relationships with internal and external stakeholders and serve as an internal subject matter expert in software development.

To excel as a Principal Data Engineer, you should have a Bachelor's degree in Computer Science, Software Engineering, or a related field; a master's degree or relevant certifications such as AWS Certified Solutions Architect or Certified Data Analytics would be advantageous. Proficiency in advanced programming, system architecture, and solution design, along with key skills in software development practices, Python, Spark, automation, CI/CD pipelines, cross-functional collaboration, technical leadership, and AWS Cloud Services, is essential for success in this role.

At FTSE Russell, we champion a culture committed to continuous learning, mentoring, and career growth opportunities, while fostering a culture of inclusion for all employees. Join us in driving financial stability, empowering economies, and creating sustainable growth as part of our dynamic and diverse team. Your individuality, ideas, and commitment to sustainability will be valued as we work together to make a positive impact globally.
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You will be responsible for leading the development of foundational components for a lakehouse architecture on AWS and overseeing the migration of existing data processing workflows to the new lakehouse solution. You will collaborate with the Data Engineering team to design and implement scalable data infrastructure and processes using technologies such as Python, PySpark, EMR Serverless, Iceberg, Glue, and the Glue Data Catalog. The primary objective of this position is to ensure a successful migration and establish robust data quality governance across the new platform, enabling reliable and efficient data processing.

To excel in this position, you will need deep technical expertise, exceptional problem-solving skills, and the ability to lead and mentor within an agile team. Your key responsibilities will include leading complex projects independently, fostering an inclusive and open culture within development teams, and setting high standards for technical contributions. You will provide strategic guidance on best practices in design, development, and implementation to ensure that solutions meet business requirements and technical standards. You will also write high-quality, efficient code, develop tools and applications to address complex business needs, and lead the development of innovative tools and frameworks to enhance data engineering capabilities.

Collaboration with architects, Product Owners, and Dev team members will be essential to decompose solutions into Epics, lead the design and planning of these components, and drive the migration of existing data processing workflows to the lakehouse architecture leveraging Iceberg capabilities. You will establish and enforce best practices for coding standards, design patterns, and system architecture, using existing design patterns to develop reliable solutions while recognizing when to adapt or avoid patterns to prevent anti-patterns.

In terms of qualifications and experience, a Bachelor's degree in Computer Science, Software Engineering, or a related field is essential; a master's degree or relevant certifications such as AWS Certified Solutions Architect or Certified Data Analytics are advantageous. Proficiency in advanced programming, system architecture, and solution design is required, along with key skills in advanced software development practices, Python and Spark expertise, automation and CI/CD pipelines, cross-functional collaboration and communication, technical leadership and mentorship, domain expertise in AWS Cloud Services, and quality assurance and continuous improvement practices.

You will work within a culture committed to continuous learning, mentoring, career growth opportunities, and inclusion for all employees. You will be part of a collaborative and creative environment where new ideas are encouraged and sustainability is a key focus. The values of Integrity, Partnership, Excellence, and Change underpin the organization's purpose of driving financial stability, empowering economies, and enabling sustainable growth. LSEG offers various benefits, including healthcare, retirement planning, paid volunteering days, and wellbeing initiatives.
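As a rough sketch of the Iceberg-on-Glue write pattern central to these lakehouse migrations, under the assumption of a Glue-catalog Spark session (catalog name, warehouse path, and table are placeholders):

```python
from pyspark.sql import SparkSession

# Hypothetical lakehouse write using Apache Iceberg with the Glue Data
# Catalog. Catalog name, warehouse path, and table are placeholders.
spark = (
    SparkSession.builder.appName("lakehouse-migration")
    .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue.warehouse", "s3://example-warehouse/")
    .getOrCreate()
)

df = spark.read.parquet("s3://example-staging/prices/")

# DataFrameWriterV2: create or replace an Iceberg table in the Glue catalog.
df.writeTo("glue.market_data.daily_prices").using("iceberg").createOrReplace()
```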
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within the Consumer and Community Banking - Consumer Card Technology, you will be a key member of an agile team, designing and delivering innovative technology products. This role offers the opportunity to work on critical technology solutions that support the firm's business objectives in a secure, stable, and scalable manner.

You will be responsible for executing software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems. Your role will involve creating secure and high-quality production code and maintaining algorithms that run synchronously with appropriate systems. You will produce architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development. Additionally, you will gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems. Proactively identifying hidden problems and patterns in data and using these insights to drive improvements to coding hygiene and system architecture will be a key aspect of your responsibilities. Furthermore, you will contribute to software engineering communities of practice and events that explore new and emerging technologies.

Required qualifications, capabilities, and skills:
- Formal training or certification on software engineering concepts and 3+ years applied experience.
- Hands-on practical experience in system design, application development, testing, and operational stability.
- Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages.
- Ability to analyze complex systems, conduct failure analysis/root cause analysis, and develop and maintain technical engineering documentation.
- Expertise in DevOps processes within a Cloud/SaaS environment (AWS), service-oriented architecture, web services/API, and modern software languages, with familiarity in Agile and lean methodologies.
- Experience in Site Reliability Engineering (SRE) practices, including monitoring, incident response, and system reliability, with proficiency in CI/CD tools such as Jenkins, GitHub, Terraform, Container Registry, etc.
- Hands-on experience and strong knowledge in Python and integrating with Python-based applications.
- Understanding and hands-on experience of AWS Lambda, Glue, IAM, KMS, API Gateways, SNS, SQS, Step Functions, EventBridge, EC2, ECS, and Load Balancers, plus skills in performance and cost optimization of AWS services.

Preferred qualifications, capabilities, and skills:
- Familiarity with modern front-end technologies and exposure to cloud technologies.
- Knowledge of Analytical App development: React / JavaScript / CSS.
- Experience having been a technical lead/mentor for a team.
- Experience working in a team, and the ability to tackle design and functionality problems independently with little to no oversight.
- Strong written and oral communication skills, with the ability to communicate effectively with all levels of management and partners from a variety of business functions.
Posted 1 week ago
5.0 - 8.0 years
14 - 19 Lacs
Pune, Chennai, Bengaluru
Hybrid
Location: Mumbai, Bangalore, Pune, Chennai, Hyderabad, Kolkata, Noida, Kochi, Coimbatore, Mysore, Nagpur, Bhubaneswar, Indore, Warangal

Job description: Senior Backend Python Developer — 7 years of total experience, with at least 2 years on AWS.

Role Description: The role is part of the Senior Cloud Platform Engineer team, responsible for designing and developing solutions necessary for cloud adoption and automation (e.g., building libraries, patterns, standards, and governance) and deploying everything via code (IaC). The role requires strong hands-on experience with cloud components and services, a development mindset, and a strong understanding of coding, CI/CD, SDLC, Agile concepts, and best practices.

Skills Required:
- Strong knowledge and hands-on experience, preferably on AWS components and services like Lambda functions, SQS, SNS, Step Functions, DynamoDB, IAM, S3, and API Gateway
- Strong development experience, preferably in AWS CDK (good to know: Serverless Framework) and Python programming
- ECR and ECS (good to know)
- Ability to write Python unit test cases (pytest)
- Individual contributor, able to lead solutions and mentor the team

Reporting relationship: This role will report to the Cloud Program Manager.

Expectations:
- Strong cloud ecosystem understanding
- Strong development mindset
- Best-practices adopter
- Quick learner and troubleshooter
- Team player and collaborator
- Focused and speedy delivery

Education:
- Graduate bachelor's degree (Engineering preferred)
- AWS Developer Associate certification (good to have)
- AWS Certified Architect (good to have)

Mandatory Skills: Microservices, Python, AWS Lambda, AWS RDS, AWS S3, AWS API Gateway, SQS, SNS, AWS Step Functions, Django, Docker, DynamoDB
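A minimal AWS CDK (Python, v2) sketch of the Lambda-plus-queue wiring this role works with. The construct names and asset path are placeholders, not project specifics:

```python
import aws_cdk as cdk
from aws_cdk import aws_lambda as lambda_
from aws_cdk import aws_sqs as sqs
from aws_cdk.aws_lambda_event_sources import SqsEventSource
from constructs import Construct

# Hypothetical stack: an SQS queue feeding a Python Lambda. Names and the
# asset path are placeholders for illustration.
class WorkerStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        queue = sqs.Queue(self, "TaskQueue",
                          visibility_timeout=cdk.Duration.seconds(60))

        worker = lambda_.Function(
            self, "WorkerFn",
            runtime=lambda_.Runtime.PYTHON_3_12,
            handler="index.handler",
            code=lambda_.Code.from_asset("src/worker"),  # assumed path
        )
        worker.add_event_source(SqsEventSource(queue))

app = cdk.App()
WorkerStack(app, "WorkerStack")
app.synth()
```

`cdk deploy` against a stack like this provisions the queue, the function, and the least-privilege IAM wiring between them.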
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Data Solutions Specialist at Quantiphi, you will be part of a global and diverse culture that values transparency, diversity, integrity, learning, and growth. We take pride in fostering an environment that encourages innovation and excellence, not only in your professional endeavors but also in your personal life.

Key Responsibilities:
- Utilize your 3+ years of hands-on experience to deliver data solutions that drive business outcomes.
- Develop data pipelines using PySpark within Databricks implementations.
- Work with Databricks Workspaces, Notebooks, Delta Lake, and APIs to streamline data processes.
- Apply your expertise in Python, Scala, and advanced SQL for effective data manipulation and optimization.
- Implement data integration projects using ETL to ensure seamless data flow.
- Build and deploy cloud-based solutions at scale by ingesting data from sources like DB2.

Preferred Skills:
- Familiarity with AWS services such as S3, Redshift, and Secrets Manager.
- Experience in implementing data integration projects using ETL, preferably with tools like Qlik Replicate & Qlik Compose.
- Proficiency in using orchestration tools like Airflow or Step Functions for workflow management.
- Exposure to Infrastructure as Code (IaC) tools like Terraform and Continuous Integration/Continuous Deployment (CI/CD) tools.
- Previous involvement in migrating on-premises data to the cloud and processing large datasets efficiently.
- Knowledge of setting up data lakes and data warehouses on cloud platforms.
- Implementation of industry best practices to ensure high-quality data solutions.

If you are someone who thrives in an environment of wild growth and enjoys collaborating with happy, enthusiastic over-achievers, Quantiphi is the place for you to nurture your career and personal development. Join us in our journey of innovation and excellence!
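A short, hedged PySpark/Delta sketch of the Databricks pipeline work described above. Table names, the mount path, and the dedup key are invented for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Illustrative Databricks-style pipeline step: ingest a raw extract and
# append it to a Delta table. Paths and names are placeholders.
spark = SparkSession.builder.appName("db2-ingest").getOrCreate()

raw = spark.read.option("header", True).csv("/mnt/landing/db2/customers/")

cleaned = (
    raw.dropDuplicates(["customer_id"])
       .withColumn("ingested_at", F.current_timestamp())
)

cleaned.write.format("delta").mode("append").saveAsTable("bronze.customers")
```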
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Rajasthan
On-site
You are a Senior Node.js Cloud Developer joining Improving South America, working on a global financial management software project in collaboration with a client present in Canada, the USA, Europe, and Asia. Your role involves backend development using Node.js and TypeScript, ensuring quality, performance, and scalability. You will be responsible for implementing Serverless and Event-Driven architectures, managing serverless functions and services using the Serverless Framework, integrating AWS services like Lambda, API Gateway, EventBridge, SNS, SQS, and Step Functions, and working on infrastructure as code using Terraform or AWS CDK. Your experience in AWS environments, specifically with Node.js and TypeScript, is crucial. Collaboration with international team members on key technical decisions is a significant aspect of this role.

To excel in this position, you must have over 10 years of software development experience, with at least 5 years dedicated to Node.js. Proficiency in TypeScript and a deep understanding of Serverless development and Event-Driven architecture are essential. Hands-on experience with AWS services such as Lambda, API Gateway, EventBridge, SNS, SQS, and Step Functions, along with knowledge of Infrastructure as Code using Terraform or AWS CDK, is required. Demonstrable expertise in AWS environments is prioritized, and an intermediate/advanced level of English proficiency is mandatory. An AWS certification is considered a bonus. Strong communication skills, teamwork abilities, organizational prowess, and time management skills are highly valued.

In return, you will receive a competitive dollarized salary, Metlife benefits in Chile, 100% remote work flexibility, bi-annual bonuses and salary reviews, vacation days, English classes, Apple equipment, access to Udemy courses, a budget for book purchases, and materials for work needs. The position offers flexible hours and schedules, allowing you the freedom to attend to family obligations or personal errands. Internal talks and presentations are encouraged during working hours, and Improving South America provides the necessary computer equipment for your tasks.

If you meet the specified requirements and are interested in this opportunity, please send your CV through Get on Board. The GETONBRD Job ID for this position is 53789. This role is 100% remote, but candidates must be based in Argentina, Colombia, Chile, Peru, or Uruguay.
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
hyderabad, telangana
On-site
As a Senior Data Engineer & Architect with 10-12 years of experience, you will design and implement enterprise-grade Data Lake solutions using AWS technologies such as S3, Glue, and Lake Formation. Your expertise in building Data Lakes and your proficiency with AWS tools like S3, EC2, Redshift, Athena, and Airflow will be essential to optimizing cloud infrastructure for performance, scalability, and cost-effectiveness.
Key Responsibilities:
- Define data architecture patterns, best practices, and frameworks for handling large-scale data ingestion, storage, compute, and processing.
- Develop and maintain ETL pipelines using tools like AWS Glue (a minimal Glue job sketch follows this listing), build robust Data Warehousing solutions on Redshift, and ensure high data quality and integrity across all pipelines.
- Collaborate with business stakeholders to define key metrics and deliver actionable insights; design and deploy dashboards and visualizations using tools like Tableau, Power BI, or Qlik.
- Implement best practices for data encryption, secure data transfer, and role-based access control to maintain data security.
- Lead audits and compliance certifications, work closely with cross-functional teams including Data Scientists, Analysts, and DevOps engineers, and mentor junior team members.
- Partner with stakeholders to define and align data strategies that meet business objectives.
Clovertex offers a competitive salary and benefits package, reimbursement for AWS certifications, a hybrid work model for maintaining work-life balance, and health insurance and benefits for employees and dependents. If you hold a Bachelor of Engineering degree in Computer Science or a related field and an AWS Certified Solutions Architect Associate certification, and have experience with Agile/Scrum methodologies, this role is a strong fit for you.
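As a hedged illustration of the ETL work described (not an excerpt from Clovertex's codebase), the following minimal AWS Glue job reads a cataloged raw table and writes partitioned Parquet into a curated zone for Athena or Redshift Spectrum to query. The awsglue libraries are available only inside the Glue runtime; the database, table, and path names are hypothetical.

# Minimal Glue ETL sketch: catalog read -> partitioned Parquet write.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read via the Glue Data Catalog rather than raw paths, so schema changes
# are tracked centrally. Database and table names are hypothetical.
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="clickstream"
)

# Write partitioned Parquet into the curated zone of the lake.
glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={
        "path": "s3://example-lake/curated/clickstream/",
        "partitionKeys": ["event_date"],
    },
    format="parquet",
)
job.commit()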
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
You are a Senior Cloud Engineer specializing in DevOps, joining the dynamic team at QloudX in Pune. QloudX is a leading service-based company providing world-class cloud-native solutions to clients globally. As an AWS Advanced Partner with competencies in several areas, QloudX engages in large-scale cloud migrations and builds cloud-native solutions from scratch, primarily on AWS.
In this role, you will collaborate with a team of highly motivated, proactive engineers dedicated to excellence in cloud technologies. The ideal candidate holds the AWS DevOps Engineer Professional certification and has extensive experience with AWS and Kubernetes; holding the Certified Kubernetes Administrator (CKA) certification and familiarity with CKAD and OpenShift are advantageous.
Key Responsibilities:
- With at least 5 years of industry experience, including a minimum of 3 years in AWS and 1 year in Kubernetes, create and maintain the foundational systems that support the organization's development teams, and guide developers on AWS, Kubernetes, and Linux issues.
- Build advanced cloud infrastructure using tools like Terraform and CloudFormation, including writing custom Terraform modules, and automate operations using Python, Bash, and AWS services like Lambda, EventBridge, and Step Functions (an automation sketch follows this listing).
- Establish robust monitoring for large-scale production environments using CloudWatch, Prometheus, Grafana, and third-party systems like Datadog, and investigate and resolve operational issues as they arise.
- Build secure systems in both cloud and Kubernetes environments, including static analysis of application code, vulnerability scanning of Docker images, Checkov for Terraform, and OPA/Kyverno for Kubernetes.
- Apply Infrastructure as Code (IaC) and Configuration as Code (CaC) practices; experience with Ansible, Chef, or Puppet is beneficial.
If you are passionate about cloud technologies, automation, and security, and possess the necessary skills and certifications, we invite you to consider this exciting opportunity to join the innovative team at QloudX.
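To illustrate the Python automation pattern mentioned above, here is a sketch of an EventBridge-scheduled Lambda that stops running EC2 instances missing a required governance tag. The tag key and the stop-rather-than-terminate policy are hypothetical choices for the example, not QloudX practice.

# Sketch of scheduled ops automation: EventBridge rule -> Lambda -> EC2.
import boto3

ec2 = boto3.client("ec2")
REQUIRED_TAG = "owner"  # hypothetical governance tag

def handler(event, context):
    stopped = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    stopped.append(instance["InstanceId"])
    if stopped:
        # Stop (rather than terminate) so owners can recover their instances.
        ec2.stop_instances(InstanceIds=stopped)
    return {"stopped": stopped}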
Posted 1 week ago