
219 CloudFormation Jobs - Page 2

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Lead Consultant specializing in AWS Rehost Migration, you will leverage your 8+ years of technical expertise to facilitate the seamless transition of IT infrastructure from on-premises to any cloud environment. Your role will involve creating landing zones and overseeing application migration processes.

Your key responsibilities will include assessing the source architecture and aligning it with the relevant target architecture within the cloud ecosystem. You must possess a strong foundation in Linux- or Windows-based systems administration, with a deep understanding of storage, security, and network protocols. Proficiency in firewall rules, VPC setup, network routing, Identity and Access Management, and security implementation will be crucial. You should have hands-on experience with CloudFormation, Terraform templates, or similar automation and scripting tools, and your expertise in implementing AWS services such as EC2, Auto Scaling, ELB, EBS, EFS, S3, VPC, RDS, and Route 53 will be essential for successful migrations. Familiarity with server migration tools such as PlateSpin, Zerto, CloudEndure, MGN, or similar platforms will be advantageous. You will also identify application dependencies using discovery tools or automation scripts and define optimal move groups for migrations with minimal downtime.

Effective verbal and written communication skills will enable you to collaborate efficiently with internal and external stakeholders and contribute to the overall success of IT infrastructure migrations. If you are a seasoned professional with a passion for cloud technologies and a proven track record in IT infrastructure migration, we invite you to join our team as a Lead Consultant - AWS Rehost Migration.
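
As a rough sketch of the landing-zone automation this role involves, the snippet below uses boto3 to deploy a minimal CloudFormation stack containing a VPC and one subnet; the stack name, CIDR blocks, and resource names are illustrative assumptions, not details from the posting:

```python
import boto3

# Minimal landing-zone template: one VPC and one subnet (CIDRs are illustrative).
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  LandingZoneVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true
  AppSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref LandingZoneVpc
      CidrBlock: 10.0.1.0/24
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="landing-zone-demo", TemplateBody=TEMPLATE)

# Block until CloudFormation reports CREATE_COMPLETE.
cfn.get_waiter("stack_create_complete").wait(StackName="landing-zone-demo")
print("stack created")
```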

Posted 4 days ago

Apply

1.0 - 5.0 years

0 Lacs

Kozhikode, Kerala

On-site

As a skilled and motivated Software Engineer with proficiency in Java, Angular, and hands-on experience with AWS cloud services, you will design, develop, test, and deploy scalable software solutions that power the products and services at Blackhawk Network. Collaborating with cross-functional teams, you will deliver high-quality code in a fast-paced environment.

Your responsibilities will include designing, developing, and maintaining scalable backend and frontend applications using Java and JavaScript frameworks like Node.js, Angular, or similar. Leveraging AWS cloud services such as Lambda, EC2, S3, API Gateway, RDS, ECS, and CloudFormation, you will deliver resilient cloud-native solutions. Writing clean, testable, and maintainable code following modern software engineering practices is crucial, as is active participation in Agile ceremonies, including sprint planning, daily standups, and retrospectives. You will collaborate with product managers, designers, and engineering peers to define, develop, and deliver new features; monitor application performance, troubleshoot issues, and drive optimizations to ensure high availability and responsiveness; engage in a rotating support schedule (2-sprint rotation) with on-call responsibilities; and use observability and monitoring tools to ensure system reliability and proactive issue detection.

Qualifications for this role include 1-2 years of professional software development experience, strong proficiency in Java and JavaScript frameworks, and hands-on experience deploying applications using AWS services in production environments. A solid understanding of RESTful API design, asynchronous data handling, and event-driven architecture is required, along with familiarity with DevOps best practices, including version control using Git and automated deployments. Experience with observability tools for logging, monitoring, and alerting is a plus. You should be a strategic thinker with strong problem-solving skills, a passion for continuous learning and improvement, effective communication skills, and a collaborative mindset. A Bachelor's degree in Computer Science, Engineering, or a related field is required; advanced degrees are considered a plus. Finally, the ability to thrive in a dynamic, fast-paced environment and adapt to changing technologies and priorities is crucial for success in this role.

Posted 4 days ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Senior Systems Engineer specializing in Data DevOps/MLOps, you will play a crucial role in our team by leveraging your expertise in data engineering, automation for data pipelines, and operationalizing machine learning models. This position requires a collaborative professional who can design, deploy, and manage CI/CD pipelines for data integration and machine learning model deployment.

You will build and maintain infrastructure for data processing and model training using cloud-native tools and services, automate processes for data validation, transformation, and workflow orchestration, and ensure seamless integration of ML models into production. Working closely with data scientists, software engineers, and product teams, you will optimize the performance and reliability of model serving and monitoring solutions. Managing data versioning, lineage tracking, and reproducibility for ML experiments will be part of your responsibilities, along with identifying opportunities to enhance scalability, streamline deployment processes, and improve infrastructure resilience. Implementing security measures to safeguard data integrity and ensure regulatory compliance will be crucial, as will diagnosing and resolving issues throughout the data and ML pipeline lifecycle.

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field, along with 4+ years of experience in Data DevOps, MLOps, or similar roles. Proficiency in cloud platforms like Azure, AWS, or GCP is required, as is competency with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible. You should have expertise in containerization and orchestration technologies like Docker and Kubernetes, a background in data processing frameworks such as Apache Spark or Databricks, and Python programming skills including data manipulation and ML libraries like Pandas, TensorFlow, and PyTorch. Familiarity with CI/CD tools such as Jenkins, GitLab CI/CD, or GitHub Actions, version control with Git, and MLOps platforms such as MLflow or Kubeflow will be valuable, as will knowledge of monitoring, logging, and alerting systems (e.g., Prometheus, Grafana). Strong problem-solving skills, the ability to contribute independently and within a team, excellent communication skills, and attention to documentation are essential.

Nice-to-have qualifications include knowledge of DataOps practices and tools like Airflow or dbt, an understanding of data governance concepts and platforms like Collibra, a background in Big Data technologies like Hadoop or Hive, and certifications in cloud platforms or data engineering.
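
The posting's emphasis on experiment reproducibility and MLOps platforms such as MLflow can be illustrated with a minimal tracking sketch; the tracking URI, experiment name, and logged values below are illustrative assumptions, not details from the listing:

```python
import mlflow

# Assumed tracking server; point this at your MLflow instance.
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("demo-experiment")

# Log hyperparameters and metrics so a run can be reproduced and compared later.
with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("n_estimators", 200)
    mlflow.log_param("max_depth", 8)
    mlflow.log_metric("val_auc", 0.91)
```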

Posted 4 days ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

You are an experienced OpenText ECM Content Server Senior Consultant with 4 to 7 years of experience who will lead enterprise content management solutions.

Key responsibilities include leading architectural design and strategy for OpenText Content Server solutions, designing integration architectures connecting Content Server with external systems, developing secure and scalable applications, APIs, and workflows, defining security measures and enterprise content governance, leading proof-of-concept initiatives and technical assessments, providing mentorship to development teams, collaborating with stakeholders, and coordinating with infrastructure teams for deployment and operations.

Required technical skills include 4+ years of experience with OpenText Content Server and its architecture, 2+ years of experience with JScript, extensive integration experience, strong programming skills in Java and JavaScript, knowledge of enterprise integration patterns, API gateways, and message brokers, database expertise, familiarity with enterprise security frameworks, and experience with WebReports, LiveReports, GCI PowerTools, and Content Server modules.

Preferred skills include experience with the OpenText Extended ECM suite, frontend development with React, Angular, or Vue.js, DevOps practices, knowledge of compliance standards, and OpenText certifications. The role also calls for strong leadership, mentoring, and team guidance, excellent communication skills, strategic thinking, project management skills, and experience with vendor management and technical governance.

Posted 4 days ago

Apply

2.0 - 6.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

We are looking for someone with 2-4 years of experience in cloud platforms like AWS, Azure, and Google Cloud Platform (GCP). This position is based in Ahmedabad. As a Cloud Research Associate (RA), you will provide expertise and guidance on cloud computing technologies, ensure effective implementation and management of cloud infrastructure, drive innovation through cloud solutions, and collaborate with IT teams, stakeholders, and external vendors to maintain secure and efficient cloud environments.

Your key responsibilities will involve leading cloud migration projects, monitoring and optimizing cloud infrastructure for performance and availability, providing technical support and troubleshooting for cloud-related issues, ensuring security and compliance with best practices, staying updated on cloud computing trends, recommending and implementing innovative cloud solutions, mentoring team members on cloud technologies, and collaborating with cross-functional teams to deliver comprehensive cloud solutions.

To excel in this role, you should have in-depth knowledge of cloud services and deployment models, hands-on experience with AWS, Azure, and GCP, proficiency with infrastructure as code (IaC) tools, a strong understanding of networking, virtualization, and storage in cloud environments, familiarity with DevOps and Agile methodologies, coding experience in languages like Java, Python, NodeJS, and Bash, and experience in creating, deploying, and operating large-scale applications on cloud platforms. Relevant cloud certifications such as AWS Certified Solutions Architect, Microsoft Certified: Azure Solutions Architect Associate, Google Cloud Certified, or similar are preferred, along with Terraform certification.

Soft skills such as problem-solving, analytical, communication, and presentation skills are essential. You should be able to work independently and as part of a team, possess strong project management and organizational skills, work well with geographically dispersed teams, have a passion for learning new technologies, and excel at managing client interactions. This role may require occasional travel to support cloud initiatives and attend conferences or training sessions. You will typically work in an office environment and may be required to work outside business hours based on customer downtime.

Posted 4 days ago

Apply

2.0 - 6.0 years

0 Lacs

Haryana

On-site

As an AWS DevOps Specialist at Darwix AI, you will be part of India's fastest-growing Gen-AI startup, which is revolutionizing enterprise sales and revenue enablement through advanced AI-driven platforms. Darwix AI empowers top global enterprises with real-time insights, intelligent analytics, and personalized customer engagement nudges, backed by leading global investors.

Your primary responsibility will be to own, manage, and optimize our AWS-based infrastructure to ensure the stability, security, and reliability of our SaaS platforms. This includes managing and scaling the cloud infrastructure, administering servers and databases, monitoring uptime and security, and developing robust CI/CD pipelines for automated testing, deployments, and rollback procedures. You will also maintain and troubleshoot our Moodle (PHP-based) environment, deploy application updates and patches, manage relational database systems (MySQL/PostgreSQL), implement backup solutions, and ensure data integrity. Additionally, you will play a key role in ensuring infrastructure security and compliance, monitoring and optimizing cloud costs, and collaborating with cross-functional teams for seamless deployments and issue resolution.

To qualify for this role, you should have at least 3 years of relevant experience in AWS infrastructure management, DevOps, or Cloud Operations, along with strong expertise in AWS services such as EC2, S3, RDS, Lambda, CloudWatch, and IAM. Hands-on experience with PHP application server management, GitHub workflows, CI/CD tools, Docker, Kubernetes, and infrastructure-as-code tools like Terraform is essential, as are excellent troubleshooting, debugging, and performance-tuning skills and the ability to operate independently while managing multiple priorities. A Bachelor's degree in Engineering, Computer Science, or a related technical discipline, AWS certifications, and a proven track record in managing SaaS/cloud infrastructure at high-growth startups are also required.

If you are a professional with 2-5 years of hands-on DevOps/AWS infrastructure experience, enjoy autonomy and ownership in fast-paced startup environments, and are passionate about optimizing cloud infrastructure and processes, this role is for you. Joining Darwix AI will give you the opportunity to be part of a fast-growing AI SaaS startup, shape the future of AI-driven enterprise sales technology, and experience significant career growth and learning in cutting-edge AI infrastructure and global markets. You will have high ownership, flexibility, and autonomy in managing critical company infrastructure. If you thrive under pressure, enjoy solving challenging technical problems, and are ready to own the infrastructure of a fast-growing Gen-AI startup, apply now and join Darwix AI!
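
As a small illustration of the backup automation this role mentions for RDS-hosted MySQL/PostgreSQL databases, here is a sketch using boto3; the instance identifier and snapshot naming scheme are assumptions for the example:

```python
import datetime

import boto3

rds = boto3.client("rds")

# Tag the snapshot with today's date so nightly runs produce unique names.
snapshot_id = "app-db-" + datetime.date.today().isoformat()

resp = rds.create_db_snapshot(
    DBSnapshotIdentifier=snapshot_id,  # illustrative naming scheme
    DBInstanceIdentifier="app-db",     # assumed RDS instance identifier
)
print(resp["DBSnapshot"]["Status"])  # typically "creating"
```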

Posted 6 days ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

We are looking for a DevOps Technical Lead who will play a crucial role in leading the development of an Infrastructure Agent powered by Generative AI (GenAI) technology. In this role, you will design and implement an intelligent Infra Agent that can handle provisioning, configuration, observability, and self-healing autonomously.

Your key responsibilities will include leading the architecture and design of the Infra Agent, integrating various automation frameworks to enhance DevOps workflows, automating infrastructure provisioning and incident remediation, developing reusable components and frameworks using Infrastructure as Code (IaC) tools, and collaborating with AI/ML engineers and SREs to create intelligent infrastructure decision-making logic. You will also implement secure and scalable infrastructure on cloud platforms such as AWS, Azure, and GCP, continuously improve agent performance through feedback loops, telemetry, and model fine-tuning, drive DevSecOps best practices, compliance, and observability, and mentor DevOps engineers while working closely with cross-functional teams.

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with at least 8 years of experience in DevOps, SRE, or Infrastructure Engineering. You must have proven experience leading infrastructure automation projects, expertise with cloud platforms like AWS, Azure, and GCP, and deep knowledge of tools such as Terraform, Kubernetes, Helm, Docker, Jenkins, and GitOps. Hands-on experience with LLMs/GenAI APIs, familiarity with automation frameworks, and proficiency in programming/scripting languages like Python, Go, or Bash are also required.

Preferred qualifications include experience building or fine-tuning LLM-based agents, contributions to open-source GenAI or DevOps projects, an understanding of MLOps pipelines and AI infrastructure, and certifications in DevOps, cloud, or AI technologies.

Posted 6 days ago

Apply

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

As an AI/ML Engineer, your primary responsibility will be to collaborate effectively with cross-functional teams, including data scientists and product managers, to acquire, process, and manage data for the integration and optimization of AI/ML models. Your role will involve designing and implementing robust, scalable data pipelines to support cutting-edge AI/ML models, as well as debugging, optimizing, and enhancing machine learning models to ensure quality and performance.

You will operate container orchestration platforms like Kubernetes, with advanced configurations and service mesh implementations, for scalable ML workload deployments, and design and build scalable LLM inference architectures, employing GPU memory optimization techniques and model quantization for efficient deployment. Advanced prompt engineering and fine-tuning of large language models (LLMs) will be crucial, with a focus on semantic retrieval and chatbot development. Documentation is an essential aspect of the work: you will record model architectures, hyperparameter optimization experiments, and validation results using version control and experiment tracking tools like MLflow or DVC. You will research and implement cutting-edge LLM optimization techniques such as quantization and knowledge distillation to ensure efficient model performance and reduced computational costs, collaborate closely with stakeholders to develop innovative natural language processing solutions specializing in text classification, sentiment analysis, and topic modeling, and stay up to date with industry trends, integrating new methodologies and frameworks to continually enhance the AI engineering function.

In terms of qualifications, a Bachelor's degree in any Engineering stream is required, along with a minimum of 4 years of relevant experience in AI. Proficiency in Python with expertise in data science libraries (NumPy, Pandas, scikit-learn) and deep learning frameworks (PyTorch, TensorFlow) is essential, as is experience with LLM frameworks, big data processing using Spark, version control and experiment tracking, software engineering and development, DevOps, infrastructure, cloud services, and LLM projects. You should have a strong mathematical foundation in statistics, probability, linear algebra, and optimization, a deep understanding of the ML and LLM development lifecycle, and expertise in feature engineering, embedding optimization, dimensionality reduction, A/B testing, experimental design, statistical hypothesis testing, RAG systems, vector databases, semantic search implementation, and LLM optimization techniques. Strong analytical thinking, excellent communication skills, experience translating business requirements into data science solutions, project management and collaboration abilities, dedication to staying current with the latest ML research, and the ability to mentor and share knowledge with team members are essential competencies for this role.
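
To make the text-classification work mentioned above concrete, here is a minimal scikit-learn sketch on toy data; the sentences and labels are invented purely for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy sentiment data: 1 = positive, 0 = negative (illustrative only).
texts = [
    "great product, works perfectly",
    "excellent support and fast delivery",
    "terrible experience, would not recommend",
    "the device stopped working after a week",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["support was excellent"]))  # expected: [1]
```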

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

You are an experienced Python Backend Engineer with a strong background in AWS and AI/ML. Your primary responsibility will be to design, develop, and maintain Python-based backend systems and AI-powered services. You will build and manage RESTful APIs using Django or FastAPI for AI/ML model integration, and develop and deploy machine learning and GenAI models using frameworks like TensorFlow, PyTorch, or Scikit-learn. Your expertise in implementing GenAI pipelines using LangChain will be crucial, and experience with LangGraph is considered a strong advantage. You will leverage AWS services such as EC2, Lambda, S3, SageMaker, and CloudFormation for infrastructure and deployment, collaborate with data scientists, DevOps engineers, and architects to integrate models and workflows into production, and build and manage CI/CD pipelines for backend and model deployments. Ensuring the performance, scalability, and security of applications in cloud environments will be paramount, as will monitoring production systems, troubleshooting issues, and optimizing model and API performance.

To excel in this role, you must have at least 5 years of hands-on experience in Python backend development and strong experience building RESTful APIs with Django or FastAPI. Proficiency in AWS cloud services, a solid understanding of ML/AI concepts, and experience with ML libraries are prerequisites. Hands-on experience with LangChain for building GenAI applications and familiarity with DevOps tools and microservices architecture will be beneficial, as will Agile development experience and exposure to tools like Docker, Kubernetes, Git, Jenkins, Terraform, and CI/CD workflows. Experience with LangGraph, LLMs, embeddings, and vector databases, as well as knowledge of MLOps tools and practices, are considered nice-to-have qualifications.

In summary, as a Python Backend Engineer with expertise in AWS and AI/ML, you will play a vital role in designing, developing, and maintaining intelligent backend systems and GenAI-driven applications, scaling backend systems, and implementing AI/ML applications effectively.
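
As a sketch of the FastAPI-based model-serving work this posting describes, here is a minimal inference endpoint; the route name and the stubbed model call are assumptions, with the real model or LangChain pipeline slotted in where the comment indicates:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str

@app.post("/generate")
def generate(req: GenerateRequest) -> dict:
    # Stub: in a real service this would call the model
    # (e.g., a LangChain chain or a SageMaker endpoint).
    completion = f"echo: {req.prompt}"
    return {"completion": completion}

# Run locally with: uvicorn app:app --reload
```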

Posted 6 days ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

At NiCE, we challenge our limits and strive to be game changers in everything we do. We are ambitious, set the highest standards, and execute beyond them. If you are like us, we have the ultimate career opportunity that will ignite a fire within you.

As a DevOps Engineer at NiCE, you will be responsible for designing, implementing, and managing the infrastructure and automation processes that facilitate seamless software delivery and system reliability at scale. Your role will involve leading DevOps initiatives, optimizing cloud environments, and collaborating across development, QA, and operations teams to enhance efficiency, security, and scalability.

You will have the opportunity to make a significant impact by:
- Designing and executing DevOps practices with a focus on automation, scalability, and continuous improvement of development pipelines and infrastructure.
- Building and managing highly efficient CI/CD pipelines to streamline the software release process.
- Implementing and managing infrastructure as code (IaC) using tools like Terraform, Ansible, or CloudFormation.
- Optimizing and managing cloud environments (AWS, Azure, GCP) to ensure high availability and security.
- Developing and maintaining monitoring solutions for system reliability, performance, and early issue detection.
- Collaborating with cross-functional teams and providing guidance on best practices for automation, security, and deployment.
- Ensuring compliance with security standards and implementing DevSecOps practices for secure software delivery.

To excel in this role, you should have:
- 8-12 years of experience in DevOps.
- A Bachelor's degree in Computer Science or equivalent.
- Proficiency in developing user-interface capabilities, software functionality, and complex web-based architectures.
- Experience designing and developing software features according to design documents and enterprise software standards.
- The ability to work in multi-disciplinary Agile teams and adopt Agile methodologies.
- Experience with UI development activities and interfacing with various R&D groups and support tiers.

Knowledge of agile development processes would be an advantage.

At NiCE, you will join a market-disrupting global company where the best of the best work in a fast-paced, collaborative, and creative environment. You will have endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and always looking to raise the bar, you may be our next NiCEr! Join us in the NiCE-FLEX hybrid model, which allows for maximum flexibility: 2 days working from the office and 3 days of remote work each week. Office days focus on face-to-face meetings that foster teamwork, collaborative thinking, innovation, and a vibrant atmosphere.

Requisition ID: 7568
Reporting into: Tech Manager
Role Type: Individual Contributor

About NiCE: NICE Ltd. (NASDAQ: NICE) is a global leader in software products used by over 25,000 businesses, including 85 of the Fortune 100 corporations. NiCE software helps deliver extraordinary customer experiences, combat financial crime, and ensure public safety. With over 8,500 employees across 30+ countries, NiCE is known for its innovation in AI, cloud, and digital technologies.

Posted 6 days ago

Apply

2.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Engineer at our company, you will play a crucial role in designing, developing, and optimizing data pipelines and workflows in a cloud-based environment. Your expertise in PySpark, Snowflake, and AWS will be key as you leverage these technologies for data processing and analytics.

Your responsibilities will include designing and implementing scalable ETL pipelines using PySpark on AWS, developing and optimizing data workflows for Snowflake integration, and managing and configuring AWS services such as S3, Lambda, Glue, EMR, and Redshift. You will collaborate with data analysts and business teams to understand requirements and deliver solutions, ensure data security and compliance with best practices in AWS and Snowflake environments, monitor and troubleshoot data pipelines and workflows for performance and reliability, and write efficient, reusable, and maintainable code for data processing and transformation.

To excel in this position, you should have strong experience with AWS services like S3, Lambda, Glue, and MSK, proficiency in PySpark for large-scale data processing, hands-on experience with Snowflake for data warehousing and analytics, and a solid understanding of SQL and database optimization techniques. Knowledge of data lake and data warehouse architectures, familiarity with CI/CD pipelines and version control systems like Git, and strong problem-solving and debugging skills are also required. Experience with Terraform or CloudFormation for infrastructure as code, knowledge of Python for scripting and automation, familiarity with Apache Airflow for workflow orchestration, and an understanding of data governance and security best practices will be beneficial. Certification in AWS or Snowflake is a plus.

You should hold a Bachelor's degree in Computer Science, Engineering, or a related field and have 6 to 10 years of experience, including 5+ years in AWS cloud engineering and 2+ years with PySpark and Snowflake. Join our Technology team as a valuable member of the Digital Software Engineering job family, working full-time to contribute your most relevant skills while continuously growing and expanding your expertise.
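
A minimal sketch of the kind of PySpark-to-Snowflake pipeline this role describes is shown below; it assumes the Snowflake Spark connector is on the classpath and an S3-capable Spark runtime (e.g., EMR or Glue), and the S3 path, table name, and connection settings are placeholders:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-to-snowflake").getOrCreate()

# Read raw data from S3 (illustrative path) and compute a daily aggregate.
df = spark.read.parquet("s3://example-bucket/raw/orders/")
daily = df.groupBy("order_date").agg(F.sum("amount").alias("total_amount"))

sf_options = {  # placeholder connection settings
    "sfURL": "acct.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "...",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}

# Write the aggregate to a Snowflake table via the Spark connector.
(daily.write.format("net.snowflake.spark.snowflake")
      .options(**sf_options)
      .option("dbtable", "DAILY_ORDER_TOTALS")
      .mode("overwrite")
      .save())
```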

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Punjab

On-site

You have an exciting opportunity to join as a DevSecOps engineer in Sydney. You should have 3+ years of extensive Python proficiency and 3+ years of Java experience, along with extensive exposure to technologies such as JavaScript, Jenkins, CodePipeline, CodeBuild, and the AWS ecosystem, including the AWS Well-Architected Framework, Trusted Advisor, GuardDuty, SCP, SSM, IAM, and WAF. A deep understanding of automation, quality engineering, architectural methodologies, principles, and solution design is essential. Hands-on experience with Infrastructure-as-Code tools like CloudFormation and CDK for automating deployments in AWS is preferred. Familiarity with operational observability (including log aggregation and application performance monitoring), deploying auto-scaling, load-balanced, highly available applications, and managing certificates (client-server, mutual TLS, etc.) is crucial for this role.

Your responsibilities will include improving the automation of security controls, working closely with the consumer showback team on defining processes and system requirements, and designing and implementing updates to the showback platform. You will collaborate with STO/account owners to uplift the security posture of consumer accounts, work with the onboarding team to ensure security standards and policies are correctly set up, and implement enterprise minimum security requirements from the Cloud Security LRP, including data masking, encryption monitoring, perimeter protections, ingress/egress uplift, and integration of SailPoint for SSO management.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

The client is a global leader in delivering cutting-edge inflight entertainment and connectivity (IFEC) solutions. As a developer in this role, you will build user interfaces using Flutter, React.js, or similar frontend frameworks, and develop backend services and APIs using Python, ensuring smooth data flow between the frontend and backend through REST APIs. You will use the Linux terminal and bash scripting for basic automation and tasks, manage code using Git, and set up CI/CD pipelines using tools like GitLab CI/CD. Deploying and managing services on AWS (CloudFormation, Lambda, API Gateway, ECS, VPC, etc.) will also be part of your responsibilities. It is essential to write clean, testable, and well-documented code while collaborating with other developers, designers, and product teams.

Requirements:
- Minimum 3 years of frontend software development experience
- Proficiency in GUI development using Flutter or other frontend stacks (e.g., React.js)
- 3+ years of Python development experience
- Experience with Python for backend and API servers
- Proficiency in the Linux terminal and bash scripting
- Familiarity with GitLab CI/CD or other CI/CD tools
- AWS experience, including CloudFormation, API Gateway, ECS, Lambda, and VPC
- Bonus: data science skills, with experience in the pandas library
- Bonus: experience developing recommendation systems and LLM-based applications

If this opportunity aligns with your expertise, please share your updated CV and relevant details with pearlin.hannah@antal.com.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As an AWS Developer at Marlabs Innovations Pvt Ltd, your main responsibility will be to design and implement a secure and scalable API Gateway on AWS. This API Gateway will serve as the integration point between a Salesforce Force.com application and an LLM (Large Language Model) AI endpoint service. You should have hands-on experience creating serverless architectures, securing APIs, and connecting cloud-native services with third-party applications and AI/ML platforms.

Your key responsibilities will include designing and developing a secure API Gateway on AWS to enable seamless communication between Salesforce and an AI endpoint, and implementing Lambda functions, IAM roles, and authentication mechanisms such as OAuth, API keys, and Cognito. Ensuring secure, low-latency, and scalable message flow between the Force.com backend and external LLM APIs will be crucial. You will also integrate with Salesforce via REST APIs, manage authentication tokens, optimize API performance, and handle error retries, logging, and monitoring through CloudWatch. Furthermore, you will ensure a fault-tolerant, highly available architecture using services like API Gateway, Lambda, S3, DynamoDB, or other relevant AWS offerings, collaborate with AI teams to consume LLM endpoints from platforms like OpenAI, Anthropic, or custom-hosted models, and follow best practices in DevOps and Infrastructure as Code (IaC) using tools like CloudFormation or Terraform.

To be successful in this role, you should have strong hands-on experience with AWS API Gateway, AWS Lambda, and IAM, proficiency in Python or Node.js for Lambda development, and experience integrating with Salesforce REST APIs and authentication workflows. A good understanding of LLM APIs, AI service integration, secure API development practices, event-driven architectures, and serverless frameworks is essential. Experience with CI/CD pipelines, CloudFormation, or Terraform, along with strong troubleshooting and debugging skills in cloud environments, will be beneficial.

Preferred qualifications include the AWS Certified Developer - Associate certification or equivalent, prior experience integrating Salesforce with external cloud services, knowledge of AI/ML pipelines and REST-based AI model interactions, and familiarity with API monitoring and analytics tools.
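
To sketch the integration pattern this posting describes, where API Gateway invokes a Lambda function that forwards a prompt to an external LLM endpoint, here is a minimal Python handler; the endpoint URL, environment variable names, and request/response shapes are illustrative assumptions, not the actual Marlabs design:

```python
import json
import os
import urllib.request

# Hypothetical LLM endpoint and key, injected via Lambda environment variables.
LLM_ENDPOINT = os.environ.get("LLM_ENDPOINT", "https://api.example.com/v1/generate")
LLM_API_KEY = os.environ.get("LLM_API_KEY", "")

def handler(event, context):
    # API Gateway (proxy integration) delivers the POST body as a JSON string.
    body = json.loads(event.get("body") or "{}")
    prompt = body.get("prompt", "")

    req = urllib.request.Request(
        LLM_ENDPOINT,
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {LLM_API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        completion = json.loads(resp.read())

    return {"statusCode": 200, "body": json.dumps(completion)}
```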

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Platform developer at Barclays, you will play a crucial role in shaping the digital landscape and enhancing customer experiences. Leveraging cutting-edge technology, you will work alongside a team of engineers, business analysts, and stakeholders to deliver high-quality solutions that meet business requirements. Your responsibilities will include tackling complex technical challenges, building efficient data pipelines, and staying updated on the latest technologies to continuously enhance your skills.

To excel in this role, you should have hands-on coding experience in Python, along with a strong understanding of and practical experience in AWS development. Experience with tools such as Lambda, Glue, Step Functions, IAM roles, and various other AWS services is essential, and your expertise in building data pipelines using Apache Spark and AWS services will be highly valued. Strong analytical skills, troubleshooting abilities, and a proactive approach to learning new technologies are key attributes for success. Experience designing and developing enterprise-level software solutions, knowledge of file formats such as JSON, Iceberg, and Avro, and familiarity with streaming services such as Kafka, MSK, Kinesis, and Glue Streaming are advantageous. Effective communication and collaboration skills are essential for interacting with cross-functional teams and documenting best practices.

Your role will involve developing and delivering high-quality software solutions, collaborating with various stakeholders to define requirements, promoting a culture of code quality, and staying updated on industry trends. Adherence to secure coding practices, implementation of effective unit testing, and continuous improvement are integral parts of your responsibilities. You will also be expected to lead and supervise a team, guide professional development, and ensure the delivery of work to a consistently high standard. Your impact will extend to related teams within the organization, and you will be responsible for managing risks, strengthening controls, and contributing to the achievement of organizational objectives.

Ultimately, you will be part of a team that upholds Barclays' values of Respect, Integrity, Service, Excellence, and Stewardship, while embodying the Barclays Mindset of Empower, Challenge, and Drive in your daily interactions and work.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

Join us as a Cloud Data Engineer at Barclays, where you'll spearhead the evolution of the digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize digital offerings, ensuring unparalleled customer experiences. You may be assessed on key critical skills relevant for success in the role, such as risk and control, change and transformation, business acumen, strategic thinking, and digital technology, as well as job-specific skill sets.

To be successful as a Cloud Data Engineer, you should have:
- Experience with AWS Cloud technology for data processing and a good understanding of AWS architecture.
- Experience with compute services such as EC2, Lambda, Auto Scaling, and VPC.
- Experience with storage and container services such as ECS, S3, DynamoDB, and RDS.
- Experience with management and governance services: KMS, IAM, CloudFormation, CloudWatch, and CloudTrail.
- Experience with analytics services such as Glue, Athena, Crawler, Lake Formation, and Redshift.
- Experience with solution delivery for data processing components in larger end-to-end projects.

Desirable skill sets / good to have:
- AWS certification.
- Experience with data processing on Databricks and Unity Catalog.
- Ability to drive projects technically, with right-first-time deliveries within schedule and budget.
- Ability to collaborate across teams to deliver complex systems and components and to manage stakeholders' expectations well.
- Understanding of different project methodologies, project lifecycles, major phases, dependencies, milestones, and the required documentation needs.
- Experience with planning, estimating, organizing, and working on multiple projects.

This role will be based out of Pune.

Purpose of the role: To build and maintain systems that collect, store, process, and analyze data, such as data pipelines, data warehouses, and data lakes, ensuring that all data is accurate, accessible, and secure.

Accountabilities:
- Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete, and consistent data.
- Design and implement data warehouses and data lakes that manage appropriate data volumes and velocity and adhere to required security measures.
- Develop processing and analysis algorithms fit for the intended data complexity and volumes.
- Collaborate with data scientists to build and deploy machine learning models.

Analyst expectations:
- Will have an impact on the work of related teams within the area.
- Partner with other functions and business areas.
- Take responsibility for the end results of a team's operational processing and activities.
- Escalate breaches of policies/procedures appropriately.
- Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
- Advise and influence decision making within own area of expertise.
- Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to.
- Deliver your work and areas of responsibility in line with relevant rules, regulations, and codes of conduct.
- Maintain and continually build an understanding of how your sub-function integrates with the function, alongside knowledge of the organization's products, services, and processes within the function.
- Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organization's sub-function.
- Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
- Guide and persuade team members and communicate complex/sensitive information.
- Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organization.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset to Empower, Challenge, and Drive, the operating manual for how we behave.
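
As a small illustration of working with the analytics services named above (Glue, Athena, Lake Formation), here is a sketch that submits an Athena query with boto3; the database, table, and S3 results location are placeholder assumptions:

```python
import boto3

athena = boto3.client("athena")

# Submit a query; Athena runs it asynchronously and writes results to S3.
resp = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS n FROM events GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics"},  # assumed Glue database
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
print("query id:", resp["QueryExecutionId"])
```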

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

As a Principal Engineer / Architect at our organization, you will combine deep technical expertise with strategic thinking to design and implement scalable, secure, and modern digital systems. This senior technical leadership role requires hands-on architecture experience, a strong command of cloud-native development, and a successful track record of leading teams through complex solution delivery. You will collaborate with cross-functional teams, including engineering, product, DevOps, and business stakeholders, to define technical roadmaps, ensure alignment with enterprise architecture principles, and guide platform evolution.

Key responsibilities:

Architecture & Design:
- Lead the design of modular, microservices-based, and secure architecture for scalable digital platforms.
- Define and enforce cloud-native architectural best practices using Azure, AWS, or GCP.
- Prepare high-level design artefacts, interface contracts, data flow diagrams, and service blueprints.

Cloud Engineering & DevOps:
- Drive infrastructure design and automation using Terraform or CloudFormation.
- Support Kubernetes-based container orchestration and efficient CI/CD pipelines.
- Optimize for performance, availability, cost, and security using modern observability stacks and metrics.

Data & API Strategy:
- Architect systems that handle structured and unstructured data with performance and reliability.
- Design APIs with reusability, governance, and lifecycle management in mind.
- Guide caching, query optimization, and stream/batch data pipelines across the stack.

Technical Leadership:
- Act as a hands-on mentor to engineering teams, leading by example and resolving architectural blockers.
- Review technical designs, codebases, and DevOps pipelines to uphold engineering excellence.
- Translate strategic business goals into scalable technology solutions with pragmatic trade-offs.

Key requirements:

Must have:
- 5+ years in software architecture or principal engineering roles with real-world system ownership.
- Strong experience in cloud-native architecture with AWS, Azure, or GCP (certification preferred).
- Programming experience with Java, Python, or Node.js, and frameworks like Flask, FastAPI, and Celery.
- Proficiency with PostgreSQL, MongoDB, Redis, and scalable data design patterns.
- Expertise in Kubernetes, containerization, and GitOps-style CI/CD workflows.
- Strong foundation in Infrastructure as Code (Terraform, CloudFormation).
- Excellent verbal and written communication; proven ability to work across technical and business stakeholders.

Nice to have:
- Experience with MLOps pipelines, observability stacks (ELK, Prometheus/Grafana), and tools like MLflow and Langfuse.
- Familiarity with Generative AI frameworks (LangChain, LlamaIndex) and vector databases (Milvus, ChromaDB).
- Understanding of event-driven, serverless, and agentic AI architecture models.
- Python libraries such as pandas, NumPy, and PySpark, and support for multi-component pipelines (MCP).

Preferred:
- Prior experience leading technical teams in regulated domains (finance, healthcare, govtech).
- A cloud security, cost optimization, and compliance-oriented architectural mindset.

What you'll gain:
- Work on mission-critical projects using the latest cloud, data, and AI technologies.
- Collaborate with a world-class, cross-disciplinary team.
- Opportunities to contribute to open architecture, reusable frameworks, and technical IP.
- Career advancement via leadership, innovation labs, and enterprise architecture pathways.
- Competitive compensation, flexibility, and a culture that values innovation and impact.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Cloud Performance QA Engineer at Tarana Wireless India, you will play a crucial role in evaluating the scalability, responsiveness, and resilience of the Tarana Cloud Suite, which encompasses cloud microservices, databases, and real-time communication with intelligent radio devices. Your responsibilities will include performance, load, stress, and soak testing, as well as chaos testing and fault injection to ensure the robustness of the system under real-world and failure conditions. You will collaborate closely with development, DevOps, and SRE teams to proactively identify and address performance issues, analyze bottlenecks, and simulate production-like environments. Your work will involve a deep understanding of system internals, cloud infrastructure (AWS), and modern observability tools, and will directly impact the quality, reliability, and scalability of the next-gen wireless platform developed by Tarana.

Key responsibilities:
- Understand the Tarana Cloud Suite architecture, including microservices, UI, data/control flows, databases, and the AWS-hosted runtime.
- Design and implement robust load, performance, scalability, and soak tests using tools like Locust, JMeter, or similar (see the Locust sketch below).
- Set up and manage scalable test environments on AWS that mimic production loads.
- Build and maintain performance dashboards using Grafana, Prometheus, or other observability tools.
- Analyze performance test results and infrastructure metrics to identify bottlenecks and optimization opportunities.
- Integrate performance testing into CI/CD pipelines for automated baselining and regression detection.
- Collaborate with cross-functional teams to define SLAs, set performance benchmarks, and resolve performance-related issues.
- Conduct resilience and chaos testing using fault injection tools to validate system behavior under stress and failures.
- Debug and root-cause performance degradations using logs, APM tools, and resource profiling.
- Tune infrastructure parameters for improved efficiency.

Required skills & experience:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3-8 years of experience in performance testing/engineering.
- Hands-on expertise with Locust, JMeter, or equivalent load testing tools.
- Strong experience with AWS services such as EC2, ALB/NLB, CloudWatch, EKS/ECS, and S3.
- Familiarity with Grafana, Prometheus, and APM tools like Datadog, New Relic, or similar.
- Proficiency in scripting and automation (Python preferred) for custom test scenarios and analysis.
- Experience testing and profiling REST APIs, web services, and microservices-based architectures.
- Exposure to chaos engineering tools or fault injection practices.
- Experience with CI/CD tools and integrating performance tests into build pipelines.

Nice to have:
- Experience with Kubernetes-based environments and container orchestration.
- Knowledge of infrastructure-as-code tools.
- Background in network performance testing and traffic simulation.
- Experience in capacity planning and infrastructure cost optimization.

Join Tarana Wireless India's QA team and contribute to the advancement of fast, affordable internet access globally through cutting-edge technology and innovative solutions.
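
A minimal Locust load-test sketch of the kind this role calls for is shown below; the endpoints and traffic mix are invented for illustration and would be replaced by real Tarana Cloud Suite APIs:

```python
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    # Each simulated user waits 1-3 seconds between requests.
    wait_time = between(1, 3)

    @task(3)  # weighted: called three times as often as get_health
    def list_devices(self):
        self.client.get("/api/v1/devices")  # hypothetical endpoint

    @task(1)
    def get_health(self):
        self.client.get("/healthz")  # hypothetical health check

# Run with: locust -f loadtest.py --host https://staging.example.com
```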

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Database & Cloud Engineer at EY, you will play a crucial role in planning, designing, and implementing database solutions within cloud environments such as AWS, Azure, or GCP. Your primary responsibility will be to ensure that database solutions align with business needs and performance requirements. You will actively monitor database performance and establish robust backup strategies, disaster recovery plans, and high availability configurations to maintain optimal system functionality. You will continuously optimize database structures, schemas, indexes, and queries to enhance performance, efficiency, and reliability, ensure adherence to security best practices, regulatory compliance standards, and data governance policies across all cloud database implementations, and collaborate with developers, DevOps teams, and stakeholders to execute database migrations and integration projects and deliver seamless system interoperability.

The ideal candidate will have up to 7 years of experience and extensive knowledge of managing relational and non-relational databases such as MySQL, PostgreSQL, Oracle, SQL Server, and MongoDB. You should have proven hands-on experience with major cloud platforms (AWS, Azure, or GCP), specific expertise in cloud-native database solutions, and proficiency with managed cloud database services including AWS RDS, Azure SQL Database, or Google Cloud SQL. A deep understanding of database architecture, schema design, performance tuning methodologies, and comprehensive security measures is required. Strong automation skills using scripting and tools like Terraform, Ansible, or CloudFormation will be beneficial, along with excellent analytical and problem-solving capabilities, strong attention to detail, and outstanding communication and collaboration skills.

At EY, you will have the opportunity to build a career tailored to your unique strengths and aspirations. Join us in creating an exceptional experience for yourself while contributing to building a better working world for all. EY is committed to helping create long-term value for clients, people, and society, and to building trust in the capital markets. With diverse teams in over 150 countries, EY leverages data and technology to provide trust through assurance and to help clients grow, transform, and operate across assurance, consulting, law, strategy, tax, and transactions. Together, we ask better questions to find new answers to the complex issues facing our world today.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Maharashtra

On-site

As a highly skilled Backend Developer, you will use your expertise in Kotlin and Java to design, develop, and deploy scalable backend services and microservices for modern cloud-native applications. Your key responsibilities will include building RESTful APIs, deploying applications on AWS, containerizing services using Docker and Kubernetes, implementing monitoring solutions, and optimizing performance and reliability. You will work closely with frontend developers, DevOps engineers, and product managers to ensure seamless integration and functionality.

Strong programming experience in Kotlin and Java, along with knowledge of RESTful APIs, AWS services, Kubernetes, Docker, and CI/CD pipelines, will be essential in this role, as will familiarity with databases, software engineering best practices, and design patterns. Preferred skills such as experience with reactive programming, Infrastructure as Code using Terraform or CloudFormation, event-driven architectures, and knowledge of secure coding practices and application monitoring tools are a plus. With 6-8 years of experience in Java development, including Core Java, Hibernate, J2EE, JSP, and Kotlin, you will be well equipped to excel in this position.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

You are an experienced and motivated DevOps and Cloud Engineer with a strong background in cloud infrastructure, automation, and continuous integration/delivery practices. Your role involves designing, implementing, and maintaining scalable, secure, and high-performance cloud environments on platforms like AWS, Azure, or GCP, collaborating closely with development and operations teams to ensure a seamless workflow.

Your key responsibilities include designing, deploying, and managing cloud infrastructure, building and maintaining CI/CD pipelines, automating infrastructure provisioning, monitoring system performance, managing container orchestration platforms, supporting application deployment, and ensuring security best practices in cloud and DevOps workflows. Troubleshooting and resolving infrastructure and deployment issues, along with maintaining up-to-date documentation for systems and processes, are also part of your role.

To qualify for this position, you should have a Bachelor's degree in Computer Science, Engineering, or a related field, along with a minimum of 5 years of experience in DevOps, Cloud Engineering, or similar roles. Proficiency in scripting languages like Python or Bash, hands-on experience with cloud platforms, knowledge of CI/CD tools and practices, and familiarity with containerization and orchestration are essential, as are a strong understanding of cloud security and compliance standards and excellent analytical, troubleshooting, and communication skills. Preferred qualifications include certifications such as AWS Certified DevOps Engineer, Azure DevOps Engineer Expert, or equivalent, as well as experience with GitOps, microservices, or serverless architecture.

Join our technology team in Trivandrum and contribute to building and maintaining cutting-edge cloud environments while enhancing our DevOps practices.

Posted 1 week ago

Apply

6.0 - 12.0 years

0 Lacs

Karnataka

On-site

As a DevOps Engineer at Capgemini, you will have the opportunity to shape your career according to your aspirations in a supportive and inspiring environment. You will work with a collaborative global community of colleagues to push the boundaries of what is achievable, playing a key role in helping the world's top organizations harness the full potential of technology to create a more sustainable and inclusive world.

Your responsibilities will include building and managing CI/CD pipelines using tools such as Jenkins, GitLab CI, and Azure DevOps. You will automate infrastructure deployment using Terraform, Ansible, or CloudFormation, set up monitoring systems with Prometheus, Grafana, and ELK, and manage containers with Docker while orchestrating them through Kubernetes. Collaborating closely with developers to integrate DevOps practices into the Software Development Life Cycle (SDLC) will also be a crucial part of your role.

Ideally, you should have 6 to 12 years of experience in DevOps, CI/CD, and Infrastructure as Code (IaC). Your expertise should extend to Docker, Kubernetes, and cloud platforms such as AWS, Azure, or GCP. Experience with monitoring tools like Prometheus, Grafana, and ELK is essential, along with knowledge of security, compliance, and performance. You should also be ready for on-call duties and adept at handling production issues.

At Capgemini, you will enjoy a flexible work environment with hybrid options, along with a competitive salary and benefits package. Your career growth will be supported through opportunities for SAP and cloud certifications, and you will thrive in an inclusive and collaborative workplace culture that values teamwork and diversity. Capgemini is a global leader in business and technology transformation, helping organizations in their digital and sustainable evolution. With a diverse team of over 340,000 members across 50 countries, Capgemini leverages its 55-year legacy to deliver comprehensive services and solutions, from strategy and design to engineering. The company's expertise in AI, generative AI, cloud, and data, combined with industry knowledge and partnerships, enables clients to unlock the true potential of technology to meet their business needs.

Posted 1 week ago

Apply

2.0 - 10.0 years

0 Lacs

pune, maharashtra

On-site

You are an experienced Data Engineer with expertise in PySpark, Snowflake, and AWS, responsible for designing, developing, and optimizing data pipelines and workflows in a cloud-based environment. Your main focus will be leveraging AWS services, PySpark, and Snowflake for data processing and analytics.

Your key responsibilities will include:
- Designing and implementing scalable ETL pipelines using PySpark on AWS.
- Developing and optimizing data workflows for Snowflake integration.
- Managing and configuring AWS services such as S3, Lambda, Glue, EMR, and Redshift.
- Collaborating with data analysts and business teams to understand requirements and deliver solutions.
- Ensuring data security and compliance with best practices in AWS and Snowflake environments.
- Monitoring and troubleshooting data pipelines and workflows for performance and reliability.
- Writing efficient, reusable, and maintainable code for data processing and transformation.

Required skills for this role include:
- Strong experience with AWS services (S3, Lambda, Glue, MSK, etc.).
- Proficiency in PySpark for large-scale data processing.
- Hands-on experience with Snowflake for data warehousing and analytics.
- A solid understanding of SQL and database optimization techniques.
- Knowledge of data lake and data warehouse architectures.
- Familiarity with CI/CD pipelines and version control systems (e.g., Git).
- Strong problem-solving and debugging skills.
- Experience with Terraform or CloudFormation for infrastructure as code.
- Knowledge of Python for scripting and automation.
- Familiarity with Apache Airflow for workflow orchestration.
- An understanding of data governance and security best practices.
- A certification in AWS or Snowflake is a plus.

For education and experience, a Bachelor's degree in Computer Science, Engineering, or a related field with 6 to 10 years of experience is required, along with 5+ years of experience in AWS cloud engineering and 2+ years of experience with PySpark and Snowflake. This position falls under the Technology Job Family Group and the Digital Software Engineering Job Family, and it is a full-time role.
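As a concrete illustration of the ETL pattern this listing describes, here is a minimal PySpark sketch that reads raw events from S3, aggregates them, and writes the result to Snowflake through the Spark-Snowflake connector. The bucket, table names, and connection values are placeholder assumptions; a real job would pull credentials from a secrets manager rather than literals.

```python
# Minimal PySpark ETL sketch: S3 -> aggregate -> Snowflake.
# All paths, table names, and credentials are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-events-etl").getOrCreate()

# Extract: raw event data landed in the data lake (assumed path).
events = spark.read.parquet("s3://example-data-lake/raw/events/")

# Transform: daily event counts per user.
daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("user_id", "event_date")
    .agg(F.count("*").alias("event_count"))
)

# Load: write to Snowflake. The connector options are the standard
# Spark-Snowflake connector keys; the values here are assumed.
sf_options = {
    "sfURL": "example.snowflakecomputing.com",
    "sfUser": "ETL_USER",
    "sfPassword": "<from-secrets-manager>",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}
(
    daily_counts.write
    .format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "DAILY_EVENT_COUNTS")
    .mode("overwrite")
    .save()
)
```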

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

karnataka

On-site

As an Enterprise IT Security Analyst (Cloud and Endpoints), you will play a crucial role in ensuring the security of our cloud environments, specifically across AWS or Azure. Your primary responsibilities will revolve around collaborating with DevOps and IT teams to implement and oversee security measures, identify and mitigate risks, and ensure compliance with industry standards. Your key responsibilities will include: - Utilizing Microsoft Defender for Cloud and EDR tools like SentinelOne, CrowdStrike, or Microsoft Defender for Endpoint to enhance security measures. - Applying AI coding techniques for anomaly detection, threat prediction, and automated response systems. - Managing Microsoft Defender for Cloud to safeguard Azure environments. - Leveraging Endpoint Detection and Response (EDR) tools for threat detection and response. - Designing, implementing, and managing security solutions across AWS, Azure, and GCP. - Employing AWS security capabilities such as AWS Inspector, WAF, GuardDuty, and IAM for cloud infrastructure protection. - Implementing Azure security features including Azure Security Center, Azure Sentinel, and Azure AD. - Managing security configurations and policies across GCP using tools like Google Cloud Armor, Security Command Center, and IAM. - Conducting regular security assessments and audits to identify vulnerabilities and ensure compliance. - Developing and maintaining security policies, procedures, and documentation. - Collaborating with cross-functional teams to integrate security best practices into the development lifecycle. - Monitoring and responding to security incidents and alerts. - Implementing and managing Cloud Security Posture Management (CSPM) solutions with tools like Prisma Cloud, Dome9, and AWS Security Hub to continuously enhance cloud security posture. - Utilizing Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, and ARM templates for cloud infrastructure automation and management. Qualifications: Must-Have Qualifications: - Bachelor's degree in Computer Science, Information Technology, or a related field. - 1-3 years of experience in cloud security engineering. - Proficiency in AWS security capabilities. - Strong skills in Terraform for Infrastructure as Code (IaC). - Experience with Cloud Security Posture Management (CSPM) tools. - Familiarity with Web Application Firewall (WAF). - A relevant certification such as CISSP, AWS Certified Security - Specialty, or similar. Good-to-Have Qualifications: - Additional experience with AWS security capabilities. - Strong understanding of cloud security frameworks and best practices. - Proficiency in Infrastructure as Code (IaC) tools like CloudFormation and ARM templates. - Experience with AI coding and applying machine learning techniques to security. - Excellent problem-solving skills and attention to detail. - Strong communication and collaboration skills. This role will be based at The Leela Office on Airport Road, Kodihalli, Bangalore. The position follows a hybrid work model with office presence on Tuesdays, Wednesdays, and Thursdays, and remote work on Mondays and Fridays. The work timings are from 1 PM to 10 PM IST, with cab pickup and drop facility available. Candidates based in Bangalore are preferred.
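Day-to-day work in a role like this often means pulling findings out of services such as GuardDuty programmatically rather than through the console. The following is a hedged Python/boto3 sketch of that triage workflow; the region and the severity threshold of 7.0 (GuardDuty's "high" band) are illustrative assumptions.

```python
# A sketch of pulling high-severity GuardDuty findings for triage.
# Region and threshold below are assumptions, not listing requirements.
import boto3

SEVERITY_THRESHOLD = 7.0  # GuardDuty treats >= 7 as "high" severity

def high_severity_findings(region: str = "us-east-1") -> list[dict]:
    gd = boto3.client("guardduty", region_name=region)
    findings = []
    for detector_id in gd.list_detectors()["DetectorIds"]:
        finding_ids = gd.list_findings(DetectorId=detector_id)["FindingIds"]
        # get_findings accepts at most 50 finding IDs per call.
        for i in range(0, len(finding_ids), 50):
            batch = gd.get_findings(
                DetectorId=detector_id,
                FindingIds=finding_ids[i:i + 50],
            )["Findings"]
            findings.extend(
                f for f in batch if f["Severity"] >= SEVERITY_THRESHOLD
            )
    return findings

if __name__ == "__main__":
    for f in high_severity_findings():
        print(f'{f["Severity"]:>4} {f["Type"]}: {f["Title"]}')
```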

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

jodhpur, rajasthan

On-site

The ideal candidate for this position should have the following qualifications and experience: Backend Requirements: - At least 5 years of experience working with Python. - Demonstrated hands-on experience with at least one of the following frameworks: Flask, Django, or FastAPI. - Proficiency in utilizing various AWS services, including Lambda, S3, SQS, and CloudFormation. - Skill in working with relational databases such as PostgreSQL or MySQL. - Familiarity with testing frameworks like pytest or nose. - Expertise in developing REST APIs and implementing JWT authentication. - Proficiency with version control tools such as Git. Frontend Requirements: - A minimum of 3 years of experience with ReactJS. - A thorough understanding of ReactJS and its core principles. - Experience with state management tools like Redux Thunk, Redux Saga, or Context API. - Familiarity with RESTful APIs and modern front-end build pipelines and tools. - Proficiency in HTML5, CSS3, and CSS pre-processors such as SASS/LESS. - Experience implementing modern authorization mechanisms, such as JSON Web Tokens (JWT). - Knowledge of front-end testing libraries like Cypress, Jest, or React Testing Library. - Bonus points for experience in developing shared component libraries. If you meet the above criteria and are looking to work in a dynamic environment where you can apply your backend and frontend development skills effectively, we encourage you to apply for this position.
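Since the backend stack here centers on Python REST APIs with JWT authentication, a compact sketch follows showing that pattern in Flask with the PyJWT library. The secret, routes, and claims are illustrative assumptions; production code would load the secret from configuration and validate real credentials.

```python
# Minimal sketch of a JWT-protected REST endpoint in Flask using PyJWT.
# The secret and routes are placeholders for illustration.
import datetime
from functools import wraps

import jwt
from flask import Flask, jsonify, request

app = Flask(__name__)
SECRET = "change-me"  # assumed; load from env/secret store in practice

def require_jwt(view):
    """Reject requests that lack a valid Bearer token."""
    @wraps(view)
    def wrapper(*args, **kwargs):
        auth = request.headers.get("Authorization", "")
        if not auth.startswith("Bearer "):
            return jsonify({"error": "missing token"}), 401
        try:
            claims = jwt.decode(auth[7:], SECRET, algorithms=["HS256"])
        except jwt.InvalidTokenError:
            return jsonify({"error": "invalid or expired token"}), 401
        return view(claims, *args, **kwargs)
    return wrapper

@app.post("/login")
def login():
    # Credential check elided; issue a token valid for one hour.
    token = jwt.encode(
        {
            "sub": "user-123",
            "exp": datetime.datetime.now(datetime.timezone.utc)
                 + datetime.timedelta(hours=1),
        },
        SECRET,
        algorithm="HS256",
    )
    return jsonify({"token": token})

@app.get("/profile")
@require_jwt
def profile(claims):
    return jsonify({"user": claims["sub"]})
```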

Posted 1 week ago

Apply