3 - 7 years
4 - 7 Lacs
Hyderabad
Work from Office
What you will do
Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. The role involves working with large datasets, developing reports, supporting and implementing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Be a key team member assisting in the design and development of the data pipeline.
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems (a minimal PySpark sketch follows this listing).
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
- Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs.
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Identify and resolve complex data-related challenges.
- Adhere to standard processes for coding, testing, and designing reusable code/components.
- Explore new tools and technologies that will help improve ETL platform performance.
- Participate in sprint planning meetings and provide estimates for technical implementation.
- Collaborate and communicate effectively with product teams.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Master's degree with 4-6 years of experience in Computer Science, IT, or a related field OR Bachelor's degree with 6-8 years of experience in Computer Science, IT, or a related field OR Diploma with 10-12 years of experience in Computer Science, IT, or a related field.

Functional Skills:
Must-Have Skills:
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, Spark SQL), including workflow orchestration and performance tuning for big data processing.
- Hands-on experience with various Python/R packages for EDA, feature engineering, and machine learning model training.
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools.
- Excellent problem-solving skills and the ability to work with large, complex datasets.
- Strong understanding of data governance frameworks, tools, and standard methodologies.
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA).

Good-to-Have Skills:
- Experience with ETL tools such as Apache Spark and various Python packages for data processing and machine learning model development.
- Strong understanding of data modeling, data warehousing, and data integration concepts.
- Knowledge of Python/R, Databricks, SageMaker, and OMOP.
Professional Certifications:
- Certified Data Engineer / Data Analyst (preferably on Databricks or cloud environments).
- Certified Data Scientist (preferably on Databricks or cloud environments).
- Machine Learning certification (preferably on Databricks or cloud environments).
- SAFe for Teams certification (preferred).

Soft Skills:
- Excellent critical-thinking and problem-solving skills.
- Strong communication and collaboration skills.
- Demonstrated ability to work effectively in a team setting.
- Demonstrated presentation skills.

Shift Information:
This position requires you to work a later shift and may be assigned a second- or third-shift schedule. Candidates must be willing and able to work evening or night shifts, as required by business needs.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
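For illustration only, here is a minimal sketch of the kind of PySpark ETL step this listing describes; the bucket, paths, and column names are hypothetical placeholders, not anything specified by the employer.

```python
# Minimal PySpark ETL sketch: read raw CSV, clean, and write partitioned Parquet.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: load raw records from a landing zone.
raw = spark.read.option("header", True).csv("s3://example-bucket/landing/records.csv")

# Transform: drop rows missing the key, normalize a date column, deduplicate.
clean = (
    raw.dropna(subset=["record_id"])
       .withColumn("event_date", F.to_date("event_date", "yyyy-MM-dd"))
       .dropDuplicates(["record_id"])
)

# Basic data-quality check before loading downstream.
assert clean.filter(F.col("event_date").isNull()).count() == 0, "unparseable dates"

# Load: write partitioned Parquet for downstream analytics.
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/records/"
)
```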
Posted 1 month ago
3 - 6 years
6 - 10 Lacs
Chennai
Work from Office
Role Summary
As part of our AI-first strategy at Creatrix Campus, you'll play a critical role in deploying, optimizing, and maintaining Large Language Models (LLMs) like LLaMA, Mistral, and CodeS across our SaaS platform. This role is not limited to experimentation: it is about operationalizing AI at scale. You'll ensure our AI services are reliable, secure, cost-effective, and product-ready for higher education institutions in 25+ countries. You'll work across infrastructure (cloud and on-prem), MLOps, and performance optimization while collaborating with software engineers, AI developers, and product teams to embed LLMs into real-world applications like accreditation automation, intelligent student forms, and predictive academic advising.

Key Responsibilities
LLM Deployment & Optimization:
- Deploy, fine-tune, and optimize open-source LLMs (e.g., LLaMA, Mistral, CodeS, DeepSeek).
- Implement quantization (e.g., 4-bit, 8-bit) and pruning for efficient inference on commodity hardware (see the sketch after this listing).
- Build and manage inference APIs (REST/gRPC) for production use.

Infrastructure Management:
- Set up and manage on-premise GPU servers and VM-based deployments.
- Build scalable cloud-based LLM infrastructure using AWS (SageMaker, EC2), Azure ML, or GCP Vertex AI.
- Ensure cost efficiency by choosing appropriate hardware and job-scheduling strategies.

MLOps & Reliability Engineering:
- Develop CI/CD pipelines for model training, testing, evaluation, and deployment.
- Integrate version control for models, data, and hyperparameters.
- Set up logging, tracing, and monitoring tools (e.g., MLflow, Prometheus, Grafana) for model performance and failure detection.

Security, Compliance & Performance:
- Ensure data privacy (FERPA/GDPR) and enforce security best practices across deployments.
- Apply secure coding standards and implement RBAC, encryption, and network hardening for cloud and on-prem environments.

Cross-functional Integration:
- Work closely with AI solution engineers, backend developers, and product owners to integrate LLM services into the platform.
- Support performance benchmarking and A/B testing of AI features across modules.

Documentation & Internal Enablement:
- Document LLM pipelines, configuration steps, and infrastructure setup in internal playbooks.
- Create guides and reusable templates for future deployments and models.

Required Qualifications
Education: Bachelor's or Master's in Computer Science, AI/ML, Data Engineering, or a related field.

Technical Skills:
- Strong Python experience with ML libraries (e.g., PyTorch, Hugging Face Transformers).
- Familiarity with LangChain, LlamaIndex, or other RAG frameworks.
- Experience with Docker, Kubernetes, and API gateways (e.g., Kong, NGINX).
- Working knowledge of vector databases (FAISS, Pinecone, Qdrant).
- Familiarity with GPU deployment tools (CUDA, Triton Inference Server, Hugging Face Accelerate).

Experience:
- 3+ years in an AI/MLOps role, including experience in LLM fine-tuning and deployment.
- Hands-on work with model inference in production environments (both cloud and on-prem).
- Exposure to SaaS and modular product environments is a plus.
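For context, 4-bit quantized inference with Hugging Face Transformers and bitsandbytes typically looks like the sketch below; the model ID, prompt, and generation settings are illustrative, and a CUDA GPU is assumed.

```python
# Sketch: 4-bit quantized LLM inference with Transformers + bitsandbytes.
# Model ID and prompt are illustrative; assumes a CUDA GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example open-source model

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights as 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for speed/accuracy
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=quant_config, device_map="auto"
)

inputs = tokenizer("Summarize the accreditation checklist:", return_tensors="pt")
inputs = inputs.to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```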
Posted 1 month ago
10 - 14 years
12 - 16 Lacs
Mumbai
Work from Office
Skill required: Delivery - Advanced Analytics
Designation: I&F Decision Sci Practitioner Assoc Mgr
Qualifications: Master of Engineering / Master's in Business Economics
Years of Experience: 10 to 14 years

What would you do?
Data & AI: You will be a core member of Accenture Operations' global Data & AI group, an energetic, strategic, high-visibility, and high-impact team, innovating and transforming the Accenture Operations business using machine learning and advanced analytics to support data-driven decisioning.

What are we looking for?
- Extensive experience in leading Data Science and Advanced Analytics delivery teams.
- Strong statistical programming experience in Python; working knowledge of cloud-native platforms such as AWS SageMaker, Azure, or GCP is preferred.
- Experience working with large datasets and big data tools such as AWS, SQL, and PySpark.
- Solid knowledge of at least two of the following: supervised and unsupervised learning, classification, regression, clustering, neural networks, ensemble modelling (random forest, boosted trees, etc.) (a minimal ensemble example appears after this listing).
- Experience working with pricing models is a plus.
- Experience in at least one of these business domains: Energy, CPG, Retail, Marketing Analytics, Customer Analytics, Digital Marketing, eCommerce, Health, Supply Chain.
- Extensive experience in client engagement and business development.
- Ability to work in a global, collaborative team environment.
- Quick learner, able to deliver results independently.

Qualifications: Master's / Ph.D. in Computer Science, Engineering, Statistics, Mathematics, Economics, or related disciplines.

Roles and Responsibilities:
- Lead a team of data scientists to build and deploy data science models that uncover deeper insights, predict future outcomes, and optimize business processes for clients.
- Refine and improve data science models based on feedback, new data, and evolving business needs.
- Analyze available data to identify opportunities for enhancing brand equity, improving retail margins, achieving profitable growth, and expanding market share for clients.
- Data Scientists in Operations follow multiple approaches to project execution: adapting existing assets to Operations use cases, exploring third-party and open-source solutions for speed of execution and for specific use cases, and engaging in fundamental research to develop novel solutions.
- Data Scientists are expected to collaborate with other data scientists, subject matter experts, sales, and delivery teams from Accenture locations around the globe to deliver strategic advanced machine learning / data-AI solutions from design to deployment.
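As a minimal example of the ensemble modelling named above, a scikit-learn random-forest baseline might look like this; the data is synthetic and the hyperparameters are illustrative.

```python
# Sketch: random-forest ensemble baseline on synthetic classification data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# An ensemble of 300 decision trees; depth capped to limit overfitting.
model = RandomForestClassifier(n_estimators=300, max_depth=8, random_state=42)
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print(f"test AUC: {roc_auc_score(y_test, probs):.3f}")
```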
Posted 1 month ago
7 - 9 years
19 - 25 Lacs
Bengaluru
Work from Office
About The Role
Job Title: Industry & Function AI Decision Science Manager, S&C GN
Management Level: 07 - Manager
Location: Primary Bengaluru; Secondary Gurugram

Must-Have Skills: Consumer Goods & Services domain expertise; AI & ML; proficiency in Python, R, PySpark, SQL; experience with cloud platforms (Azure, AWS, GCP); expertise in Revenue Growth Management, Pricing Analytics, Promotion Analytics, PPA/Portfolio Optimization, and Trade Investment Optimization.
Good-to-Have Skills: Experience with Large Language Models (LLMs) like ChatGPT, Llama 2, or Claude 2; familiarity with optimization methods, advanced visualization tools (Power BI, Tableau), and time series forecasting.

Job Summary: As a Decision Science Manager, you will lead the design and delivery of AI solutions in the Consumer Goods & Services domain. This role involves working closely with clients to provide advanced analytics and AI-driven strategies that deliver measurable business outcomes. Your expertise in analytics, problem-solving, and team leadership will help drive innovation and value for the organization.

Roles & Responsibilities:
- Analyze extensive datasets and derive actionable insights from Consumer Goods data sources (e.g., Nielsen, IRI, EPOS, TPM).
- Evaluate AI and analytics maturity in the Consumer Goods sector and develop data-driven solutions.
- Design and implement AI-based strategies to deliver significant client benefits.
- Employ structured problem-solving methodologies to address complex business challenges.
- Lead data science initiatives, mentor team members, and contribute to thought leadership.
- Foster strong client relationships and act as a key liaison for project delivery.
- Build and deploy advanced analytics solutions using Accenture's platforms and tools.
- Apply technical proficiency in Python, PySpark, R, SQL, and cloud technologies for solution deployment.
- Develop compelling data-driven narratives for stakeholder engagement.
- Collaborate with internal teams to innovate, drive sales, and build new capabilities.
- Drive insights in critical Consumer Goods domains such as: Revenue Growth Management; pricing analytics and pricing optimization (a minimal elasticity sketch appears after this listing); promotion analytics and promotion optimization; SKU rationalization / portfolio optimization; price pack architecture; decomposition models; time series forecasting.

Professional & Technical Skills:
- Proficiency in AI and analytics solutions (descriptive, diagnostic, predictive, prescriptive, generative).
- Expertise in delivering large-scale projects/programs for Consumer Goods clients in Revenue Growth Management: pricing analytics, promotion analytics, portfolio optimization, etc.
- Deep and clear understanding of the data sources typically used in RGM programs: POS, syndicated, shipment, finance, promotion calendar, etc.
- Strong programming skills in Python, R, PySpark, and SQL; experience with cloud platforms (Azure, AWS, GCP) and proficiency with services like Databricks and SageMaker.
- Deep knowledge of traditional and advanced machine learning techniques, including deep learning.
- Experience with optimization techniques (linear, nonlinear, evolutionary methods).
- Familiarity with visualization tools like Power BI and Tableau.
- Experience with Large Language Models (LLMs) like ChatGPT and Llama 2.
- Certifications in Data Science or related fields.

Additional Information: The ideal candidate has a strong educational background in data science and a proven track record of delivering impactful AI solutions in the Consumer Goods sector.
This position offers opportunities to lead innovative projects and collaborate with global teams. Join Accenture to leverage cutting-edge technologies and deliver transformative business outcomes.

Qualifications
Experience: Minimum 7-9 years of experience in data science, particularly in the Consumer Goods sector.
Educational Qualification: Bachelor's or Master's degree in Statistics, Economics, Mathematics, Computer Science, or an MBA (Data Science specialization preferred).
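Pricing analytics of the kind this listing describes often starts from a log-log price-elasticity regression. The sketch below uses synthetic data; in practice the inputs would come from POS or syndicated sales records, and the true elasticity is of course unknown.

```python
# Sketch: estimating own-price elasticity with a log-log OLS regression.
# Synthetic data; the elasticity of -1.5 is baked in so the fit can be checked.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
price = rng.uniform(2.0, 6.0, 500)
# log(units) = a + b * log(price) + noise, with b = -1.5 (the elasticity)
units = np.exp(3.0 - 1.5 * np.log(price) + rng.normal(0, 0.1, 500))

X = sm.add_constant(np.log(price))
model = sm.OLS(np.log(units), X).fit()
print(f"estimated elasticity: {model.params[1]:.2f}")  # ~ -1.5
```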
Posted 1 month ago
3 - 8 years
5 - 10 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: AWS Architecture
Good-to-have skills: Amazon Web Services (AWS)
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Job Title: AWS Data Engineer

About The Role: We are seeking a skilled AWS Data Engineer with expertise in AWS services such as Glue, Lambda, SageMaker, CloudWatch, and S3, coupled with strong Python/PySpark development skills. The ideal candidate will have a solid grasp of ETL concepts, be proficient in writing complex SQL queries, and be capable of handling client interactions independently. They should demonstrate a track record of efficiently resolving tickets, tasks, bugs, and enhancements within stipulated timelines. Good communication skills are essential, and basic knowledge of databases is preferred.

Must-Have Skills: AWS Glue; AWS Architecture; Python/PySpark development; advanced SQL; client handling; problem-solving skills; communication skills.

Responsibilities:
- Develop and maintain AWS-based data solutions utilizing services like Glue, Lambda, SageMaker, CloudWatch, DynamoDB, and S3 (a skeleton Glue job appears after this listing).
- Implement ETL processes effectively within Glue jobs and PySpark scripts, ensuring optimal performance and reliability.
- Proficiently write and optimize complex SQL queries to extract, transform, and load data from various sources.
- Independently handle client interactions: understand requirements, provide technical guidance, and ensure client satisfaction.
- Resolve tickets, tasks, bugs, and enhancements promptly, meeting defined resolution timeframes.
- Communicate effectively with team members, stakeholders, and clients, providing updates, reports, and insights as required.
- Maintain a basic understanding of databases, supporting data-related activities and troubleshooting when necessary.
- Stay updated on industry trends, AWS advancements, and best practices, contributing to continuous improvement initiatives within the team.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience working with AWS services, particularly Glue, Lambda, SageMaker, CloudWatch, and S3.
- Strong proficiency in Python and/or PySpark development for data processing and analysis.
- Solid understanding of ETL concepts, databases, and data warehousing principles.
- Excellent problem-solving skills and the ability to work independently or within a team.
- Outstanding communication skills, both verbal and written, with the ability to interact professionally with clients and colleagues.
- Ability to manage multiple tasks concurrently and prioritize effectively in a dynamic work environment.

Good to have: Basic knowledge of relational databases such as MySQL, PostgreSQL, or SQL Server.

Qualifications: 15 years of full-time education.
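An AWS Glue PySpark job of the sort described typically starts from the boilerplate below; the database, table, and S3 path are hypothetical placeholders, and the transform is deliberately trivial.

```python
# Sketch: skeleton of an AWS Glue PySpark job (names and paths hypothetical).
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue boilerplate: resolve job arguments and initialize contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="example_raw"
)

# Transform with plain Spark, then convert back to a DynamicFrame.
deduped = dyf.toDF().dropDuplicates(["id"])
out = DynamicFrame.fromDF(deduped, glue_context, "deduped")

# Load: write Parquet to S3.
glue_context.write_dynamic_frame.from_options(
    frame=out,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/cleaned/"},
    format="parquet",
)
job.commit()
```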
Posted 1 month ago
5 - 10 years
27 - 30 Lacs
Kochi, Thiruvananthapuram
Work from Office
We are seeking a highly skilled and independent Senior Machine Learning Engineer (Contractor) to design, develop, and deploy advanced ML pipelines in an AWS environment.

Key Responsibilities:
- Design, develop, and deploy robust and scalable machine learning models.
- Build and maintain ML pipelines for data preprocessing, model training, evaluation, and deployment (a minimal pipeline sketch appears after this listing).
- Collaborate with data scientists, data engineers, and product teams to identify ML use cases and develop prototypes.
- Optimize models for performance, accuracy, and scalability in real-time or batch systems.
- Monitor and troubleshoot deployed models to ensure ongoing performance.

Location: Kochi, Trivandrum, Remote.
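A preprocessing-plus-training-plus-evaluation pipeline like the one named above can be sketched with scikit-learn; the data is synthetic and the steps are illustrative, not this team's actual stack.

```python
# Sketch: preprocessing + model training + evaluation in one sklearn Pipeline.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=2000, n_features=15, noise=10.0, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

pipeline = Pipeline([
    ("scale", StandardScaler()),  # preprocessing step
    ("model", Ridge(alpha=1.0)),  # training step
])
pipeline.fit(X_train, y_train)

# Evaluation step: score the held-out split.
print(f"test R^2: {r2_score(y_test, pipeline.predict(X_test)):.3f}")
```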
Posted 1 month ago
3 - 5 years
0 - 0 Lacs
Kochi
Work from Office
Job Summary: We are seeking a highly skilled Senior Python Developer with expertise in Machine Learning (ML), Large Language Models (LLMs), and cloud technologies. The ideal candidate will be responsible for end-to-end execution, from requirement analysis and discovery to the design, development, and implementation of ML-driven solutions. The role demands both technical excellence and strong communication skills to work directly with clients, delivering POCs, MVPs, and scalable production systems.

Key Responsibilities:
- Collaborate with clients to understand business needs and identify ML-driven opportunities.
- Independently design and develop robust ML models, time series models, deep learning solutions, and LLM-based systems.
- Deliver Proof of Concepts (POCs) and Minimum Viable Products (MVPs) with agility and innovation.
- Architect and optimize Python-based ML applications focusing on performance and scalability (a minimal serving sketch appears after this listing).
- Utilize GitHub for version control, collaboration, and CI/CD automation.
- Deploy ML models on cloud platforms such as AWS, Azure, or GCP.
- Follow best practices in software development, including clean code, automated testing, and thorough documentation.
- Stay updated on evolving trends in ML, LLMs, and the cloud ecosystem.
- Work collaboratively with Data Scientists, DevOps engineers, and Business Analysts.

Must-Have Skills:
- Strong programming experience in Python and frameworks such as FastAPI, Flask, or Django.
- Solid hands-on expertise in ML using Scikit-learn, TensorFlow, PyTorch, Prophet, etc.
- Experience with LLMs (e.g., OpenAI, LangChain, Hugging Face, vector search).
- Proficiency in cloud services like AWS (S3, Lambda, SageMaker), Azure ML, or GCP Vertex AI.
- Strong grasp of software engineering concepts: OOP, design patterns, data structures.
- Experience with version control systems (Git/GitHub/GitLab) and setting up CI/CD pipelines.
- Ability to work independently and solve complex problems with minimal supervision.
- Excellent communication and client interaction skills.

Required Skills: Python, Machine Learning, Machine Learning Models
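Serving a trained model behind a FastAPI endpoint, one of the framework pairings this listing names, might look like the sketch below; the model file and feature schema are hypothetical.

```python
# Sketch: FastAPI endpoint serving a pre-trained scikit-learn model.
# "model.joblib" and the flat feature schema are hypothetical placeholders.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # assumes a fitted model saved earlier

class Features(BaseModel):
    values: list[float]  # flat feature vector expected by the model

@app.post("/predict")
def predict(features: Features) -> dict:
    # scikit-learn expects a 2-D array: one row per sample.
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run locally with: uvicorn main:app --reload
```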
Posted 1 month ago
5 - 10 years
10 - 20 Lacs
Pune
Hybrid
This AI Ops Engineer role focuses on deploying, monitoring, and scaling AI/GenAI models using MLOps, CI/CD, cloud platforms (AWS/Azure/GCP), Python, Kubernetes, MLflow, security, and automation.
Posted 1 month ago
5 - 10 years
8 - 18 Lacs
Pune
Hybrid
Experienced AI Engineer with 4+ years deploying scalable ML solutions on cloud platforms such as AWS, Azure, and GCP; skilled in Python, SQL, Kubernetes, and MLOps practices including CI/CD and model monitoring.
Posted 1 month ago
6 - 10 years
15 - 19 Lacs
Bengaluru
Work from Office
As a Principal Data Engineer on the Marketplace team, you will be responsible for analyzing and interpreting complex datasets to generate insights that directly influence business strategy and decision-making. You will apply advanced statistical analysis and predictive modeling techniques to identify trends, predict future outcomes, and assess data quality. These insights will drive data-driven decisions and strategic initiatives across the organization.

The Marketplace team is responsible for building the services where our customers will go to purchase pre-configured software installations on the platform of their choice. The challenges here span the entire stack, from back-end distributed services operating at cloud scale, to e-commerce transactions, to the web apps that users interact with. This is the perfect role for someone experienced in designing distributed systems, writing and debugging code across an entire stack (UI, APIs, databases, cloud infrastructure services), championing operational excellence, mentoring junior engineers, and driving development process improvements in a start-up-style environment.

Career Level - IC4

Responsibilities
As a Principal Data Engineer, you will be at the forefront of Oracle's data initiatives, playing a pivotal role in transforming raw data into actionable insights. Collaborating with data scientists and business stakeholders, you will design scalable data pipelines, optimize data infrastructure, and ensure the availability of high-quality datasets for strategic analysis. This role goes beyond data engineering, requiring hands-on involvement in statistical analysis and predictive modeling. You will use techniques such as regression analysis, trend forecasting, and time-series modeling to extract meaningful insights from data, directly supporting business decision-making.

Basic Qualifications:
- 7+ years of experience in data engineering and analytics, with a strong background in designing scalable database architectures, building and optimizing data pipelines, and applying statistical analysis to deliver strategic insights across complex, high-volume data environments.
- Deep knowledge of big data frameworks such as Apache Spark, Apache Flink, Apache Airflow, Presto, and Kafka, as well as data warehouse solutions.
- Experience working with other cloud platform teams and accommodating requirements from those teams (compute, networking, search, storage).
- Excellent written and verbal communication skills, with the ability to present complex information clearly and concisely to all audiences.
- Design and optimize database structures to ensure scalability, performance, and reliability within Oracle ADW and OCI environments; this includes maintaining schema integrity, managing database objects, and implementing efficient table structures that support seamless reporting and analytical needs.
- Build and manage data pipelines that automate the flow of data from diverse sources into Oracle databases, using ETL processes to transform data for analysis and reporting.
- Conduct data quality assessments, identify anomalies, and validate the accuracy of data ingested into our systems. Working alongside data governance teams, you will establish metrics to measure data quality and implement controls to uphold data integrity, ensuring reliable data for stakeholders.
- Mentor junior team members and share best practices in data analysis, modeling, and domain expertise.
Preferred Qualifications:
- Solid understanding of statistical methods, hypothesis testing, data distributions, regression analysis, and probability.
- Proficiency in Python for data analysis and statistical modeling, with experience using libraries like pandas, NumPy, and SciPy (a brief sketch follows this listing).
- Knowledge of methods and techniques for data quality assessment, anomaly detection, and validation processes.
- Skills in defining data quality metrics, creating data validation rules, and implementing controls to monitor and uphold data integrity.
- Familiarity with visualization tools (e.g., Tableau, Power BI, Oracle Analytics Cloud) and libraries (e.g., Matplotlib, Seaborn) to convey insights effectively.
- Strong communication skills for collaborating with stakeholders and translating business goals into technical data requirements.
- Ability to contextualize data insights in business terms and to present findings to non-technical stakeholders in a meaningful way.
- Ability to cleanse, transform, and aggregate data from various sources, ensuring it's ready for analysis.
- Experience with relational database management and design, specifically in Oracle environments (e.g., Oracle Autonomous Data Warehouse, Oracle Database).
- Skills in designing, maintaining, and optimizing database schemas to ensure efficiency, scalability, and reliability.
- Advanced SQL skills for complex queries, indexing, stored procedures, and performance tuning.
- Experience with ETL tools such as Oracle Data Integrator (ODI) or other data integration frameworks.
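The statistical workflow named above (hypothesis testing plus regression in Python) can be sketched as follows; the data is synthetic and the scenario is illustrative only.

```python
# Sketch: a two-sample t-test and a simple linear regression with SciPy/NumPy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_a = rng.normal(100, 15, 200)  # e.g., a metric before a change
group_b = rng.normal(105, 15, 200)  # e.g., the same metric after a change

# Hypothesis test: is the difference in means statistically significant?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Simple linear regression for trend analysis over time.
x = np.arange(200, dtype=float)
y = 2.0 * x + rng.normal(0, 20, 200)
result = stats.linregress(x, y)
print(f"slope = {result.slope:.2f}, r^2 = {result.rvalue ** 2:.3f}")
```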
Posted 1 month ago
3 - 6 years
11 - 19 Lacs
Hyderabad
Work from Office
Job Description: The Data Engineer at Mirabel Technologies should be an avid programmer in Java, Python, R, or Scala with expertise in implementing complex algorithms. The Data Engineer will work on collecting, storing, processing, and analyzing huge sets of data. The primary focus will be on choosing optimal solutions for these purposes, then maintaining, implementing, and monitoring them. You will also be responsible for integrating them with the architecture used across the company in various products.

Skillset:
1. Proficient understanding of distributed computing principles
2. Ability to build, run, and manage large clusters
3. Hadoop v2, MapReduce, HDFS
4. Java, Python
5. Large-scale crawling: Scrapy, Nutch, and custom crawling solutions
6. Experience with Apache Solr / Lucene
7. NoSQL databases, such as MongoDB, HBase, Cassandra
8. Knowledge of various ETL techniques and frameworks, such as Flume
9. Experience with NLP tools and systems for POS tagging, NER, and information extraction
10. Experience with machine learning: regression, classification, decision trees (a baseline classifier sketch appears after this listing)
11. Experience with Linux / AWS

Experience: 4-5 years
Key skills: NLP, LLMs, AWS SageMaker, Deep Learning, and NoSQL databases
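A baseline text classifier of the kind implied by the NLP and classification requirements can be sketched in a few lines of scikit-learn; the toy sentiment data below is purely illustrative.

```python
# Sketch: TF-IDF + logistic-regression text-classification baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

texts = ["great product", "terrible service", "loved it", "awful experience"]
labels = [1, 0, 1, 0]  # toy sentiment labels: 1 = positive, 0 = negative

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # word + bigram features
    ("model", LogisticRegression()),
])
clf.fit(texts, labels)

# With data this tiny the output is illustrative only.
print(clf.predict(["pretty great service"]))
```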
Posted 1 month ago
3 - 6 years
5 - 8 Lacs
Pune
Work from Office
Technical Infrastructure: Here's just some of what we use:
- AWS (EC2, IAM, EKS, etc.), Terraform Enterprise, Docker, Kubernetes, Aurora, Mesos, HashiCorp Vault and Consul
- Datadog and PagerDuty
- Microservices architecture, Spring, Java & NodeJS, React, Koa, Express.js
- Amazon RDS, DynamoDB, Postgres, Oracle, MySQL, GitHub, Jenkins, Concourse CI, JFrog Artifactory

About the role:
You will constantly be asking: what are the most important infrastructure problems we need to solve today that will increase our applications' and infrastructure's reliability and performance? You will apply your deep technical knowledge, taking a broad look at our technology infrastructure. You'll help us identify common and systemic issues and validate these, prioritizing which to strategically address first.

We value collaboration. So you will partner with our SRE/DevOps team, discussing and refining your ideas and preparing proofs of concept. You will present and validate these across technology teams, figuring out the best solution, and you'll be given ownership to engineer and implement your solutions. There are lots of interesting technology problems for you to solve, so you will constantly apply the latest thinking. These include implementing canary deployments, designing a new automated pipeline solution, extending Kubernetes capabilities, implementing machine learning to build load testing, and ensuring immutability of containerization.

You will get to evaluate existing technologies and design the future state without being afraid to challenge the status quo. And you'll regularly review existing infrastructure, looking for opportunities to improve (e.g., service improvements, cost reduction, security, performance). You'll also get to automate everything necessary, combining reliability with a pragmatic approach, doing it right the first time. We're continuing our journey of making our code and configuration deployments self-serve for our development teams. You'll help us build and maintain the right tooling, and you'll have ownership to design and implement the infrastructure needed. You'll also be involved in the daily management of our AWS infrastructure. This means working with our Agile development teams to troubleshoot server, application, and performance issues.

Skills & Experience:
- 3 to 6 years of relevant hands-on SRE/DevOps experience in an Agile environment.
- Substantial experience with AWS services in a production environment.
- Demonstrated expertise in managing and modernizing legacy systems and infrastructure.
- Ability to collaborate effectively with both engineers and operations, and comfort recommending best practices.
- The expertise and skills to navigate the AWS ecosystem, knowing when and where to recommend the most appropriate service and/or usage pattern.
- Experience resolving outages: able to quickly diagnose issues and instrumental in restoring normal service levels.
- Intellectual curiosity and an appetite to learn more.
- Strong hands-on experience working with Linux environments; Windows experience is a plus.
- Strong proficiency in scripting languages (e.g., Bash, Python) for automation and process optimization (a small boto3 sketch appears after this listing).
- Experience with CI/CD tools such as Jenkins, GitHub Actions, or (preferably) Concourse CI.
- Expertise in containerization technologies like Docker and orchestration tools such as Kubernetes.
- Practical experience managing event-driven systems, messaging queues, and load balancers.
- Strong understanding of monitoring, logging, and observability tools to ensure system reliability; Datadog and PagerDuty exposure is good to have.
- Proven ability to troubleshoot critical outages, identify root causes, and restore service quickly.
- Proficiency in HashiCorp technologies, including Terraform (IaC), Vault (secrets management), and Consul (service discovery and config management).

You'll also have significant experience and/or an interest in the following:
- Managing cloud infrastructure as code, preferably using Terraform.
- Application container management and orchestration, primarily in Kubernetes environments, preferably AWS EKS.
- Maintaining managed databases, including AWS RDS; experience in how to tune and scale, and in how performance and reliability are achieved.
- Good understanding of PKI infrastructure and CDN technologies, including AWS CloudFront.
- Expertise in AWS security, including the AWS IAM service.
- Experience with AWS Lambda and AWS SageMaker.
- Experience working with, and a strong understanding of, firewalls and network and application load balancing.
- A strong and informed point of view with respect to monitoring tools and how best to use them.
- Ability to work in cloud-based environments spanning multiple AWS accounts, including their management and integration.
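Automation scripting of the kind listed above often uses boto3; below is a minimal sketch that flags EC2 instances failing status checks. The region is illustrative, and standard AWS credentials are assumed.

```python
# Sketch: boto3 script flagging EC2 instances that fail status checks.
# Region is illustrative; assumes credentials via the standard AWS chain.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# IncludeAllInstances=True also reports stopped instances, not just running ones.
response = ec2.describe_instance_status(IncludeAllInstances=True)
for status in response["InstanceStatuses"]:
    instance_ok = status["InstanceStatus"]["Status"] == "ok"
    system_ok = status["SystemStatus"]["Status"] == "ok"
    if not (instance_ok and system_ok):
        print(
            f"ALERT: {status['InstanceId']} failing checks "
            f"(instance={status['InstanceStatus']['Status']}, "
            f"system={status['SystemStatus']['Status']})"
        )
```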
Posted 1 month ago