8.0 - 15.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
TCS Hiring for Azure Cloud Architect (Platform)_PAN India
Experience: 8 to 15 Years Only
Job Location: PAN India

Required Technical Skill Set:
Proven experience as a Solution Architect with a focus on Microsoft Azure.
Good knowledge of application development and migration; knowledge of Java or .NET.
Strong knowledge of Azure services: Azure Kubernetes Service (AKS), Azure App Services, Azure Functions, Azure Storage, and Azure DevOps.
Experience in cloud-native application development and containerization (Docker, Kubernetes).
Proficiency in Infrastructure as Code (IaC) tools (e.g., Terraform, ARM templates, Bicep); a minimal provisioning sketch follows this posting.
Strong knowledge of Azure Active Directory, identity management, and security best practices.
Hands-on experience with CI/CD processes and DevOps practices.
Knowledge of networking concepts in Azure (VNets, Load Balancers, Firewalls).
Excellent communication and stakeholder management skills.

Key Responsibilities:
Design end-to-end cloud solutions leveraging Microsoft Azure services.
Develop architecture and solution blueprints that align with business objectives.
Lead cloud adoption and migration strategies.
Collaborate with development, operations, and security teams to implement best practices.
Ensure solutions meet performance, scalability, availability, and security requirements.
Optimize cloud cost and performance.
Oversee the deployment of workloads on Azure using IaaS, PaaS, and SaaS services.
Implement CI/CD pipelines, automation, and infrastructure as code (IaC).
Stay updated on emerging Azure technologies and provide recommendations.

Kind Regards,
Priyankha M
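Not part of the TCS posting: purely as an illustration of the programmatic provisioning that sits alongside the IaC tools listed above, a minimal sketch using the Azure SDK for Python. The subscription ID, resource group name, region, and tags are placeholder assumptions.

```python
# Illustrative only: create and inspect a resource group with the Azure SDK for Python.
# Assumes `azure-identity` and `azure-mgmt-resource` are installed and the caller is
# already authenticated (e.g., via `az login`); all names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, subscription_id="<subscription-id>")

# Create (or update) a resource group, the unit most IaC templates target first.
rg = client.resource_groups.create_or_update(
    "rg-demo-architecture",
    {"location": "centralindia", "tags": {"owner": "platform-team"}},
)
print(rg.name, rg.location)
```

In practice the same resource group would more likely be declared in Terraform, ARM, or Bicep; the SDK call simply shows the underlying ARM operation those tools drive.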
Posted 5 hours ago
3.0 - 4.0 years
0 Lacs
Surat, Gujarat, India
On-site
Job Title - DevOps Engineer
Location - Surat (On-site)
Experience - 3-4 years

Job Summary:
We are looking for a DevOps Engineer to help us build functional systems that improve customer experience. DevOps Engineer responsibilities include deploying product updates, identifying production issues, and implementing integrations that meet customer needs. If you have a solid background in software engineering and are familiar with Ruby or Python, we’d like to meet you. Ultimately, you will execute and automate operational processes quickly, accurately, and securely.

Roles & Responsibilities:
Strong experience with essential DevOps tools and technologies, including Kubernetes, Terraform, Azure DevOps, Jenkins, Maven, Git, GitHub, and Docker.
Hands-on experience with Azure cloud services, including: Virtual Machines (VMs), Blob Storage, Virtual Network (VNet), Load Balancer & Application Gateway, Azure Resource Manager (ARM), Azure Key Vault, Azure Functions, Azure Kubernetes Service (AKS), Azure Monitor, Log Analytics, and Application Insights, Azure Container Registry (ACR) and Azure Container Instances (ACI), Azure Active Directory (AAD) and RBAC.
Creative in automating, configuring, and deploying infrastructure and applications across Azure environments and hybrid cloud data centers.
Build and maintain CI/CD pipelines using Azure DevOps, Jenkins, and scripting for scalable SaaS deployments.
Develop automation and infrastructure-as-code (IaC) using Terraform, ARM Templates, or Bicep for managing and provisioning cloud resources.
Expert in managing containerized applications using Docker and orchestrating them via Kubernetes (AKS); a minimal readiness-check sketch follows this posting.
Proficient in setting up monitoring, logging, and alerting systems using Azure-native tools and integrating with third-party observability stacks.
Experience implementing auto-scaling, load balancing, and high-availability strategies for cloud-native SaaS applications.
Configure and maintain CI/CD pipelines and integrate them with quality and security tools for automated testing, compliance, and secure deployments.
Deep knowledge of writing Ansible playbooks and ad hoc commands for automating provisioning and deployment tasks across environments.
Experience integrating Ansible with Azure DevOps/Jenkins for configuration management and workflow automation.
Proficient in using Maven and Artifactory for build management and writing pom.xml scripts for Java-based applications.
Skilled in GitHub repository management, including setting up project-specific access, enforcing code quality standards, and managing pull requests.
Experience with web and application servers such as Apache Tomcat for deploying and troubleshooting enterprise-grade Java applications.
Ability to design and maintain scalable, resilient, and secure infrastructure to support rapid growth of SaaS applications.

Qualifications & Requirements:
Proven experience as a DevOps Engineer, Site Reliability Engineer, or in a similar software engineering role.
Strong experience working in SaaS environments with a focus on scalability, availability, and performance.
Proficiency in Python or Ruby for scripting and automation.
Working knowledge of SQL and database management tools.
Strong analytical and problem-solving skills with a collaborative and proactive mindset.
Familiarity with Agile methodologies and ability to work in cross-functional teams.
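Illustrative only (not from the posting): a minimal sketch of the kind of AKS rollout readiness check a DevOps engineer might script in Python, using the official Kubernetes client. The deployment name, namespace, and kubeconfig source are assumptions.

```python
# Check rollout status of a deployment on AKS with the Kubernetes Python client.
# Assumes a valid kubeconfig (e.g., fetched via `az aks get-credentials`);
# the deployment and namespace names are made up for illustration.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
apps = client.AppsV1Api()

dep = apps.read_namespaced_deployment(name="web-api", namespace="prod")
desired = dep.spec.replicas or 0
ready = dep.status.ready_replicas or 0
print(f"web-api: {ready}/{desired} replicas ready")
if ready < desired:
    raise SystemExit("rollout incomplete")
```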
Posted 5 hours ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About The Role
We're seeking an experienced Infrastructure Engineer to join our platform team, handling massive-scale data processing and analytics infrastructure that supports over 5B+ events and more than 5M+ DAU. We're looking for someone who can help us scale gracefully while optimizing for performance, cost, and resiliency.

Key Responsibilities
Design, implement, and manage our AWS infrastructure, with a strong emphasis on automation, resiliency, and cost-efficiency.
Develop and oversee scalable data pipelines (for event processing, transformation, and delivery); a minimal ingestion sketch follows this posting.
Implement and manage stream processing frameworks (such as Kinesis, Kafka, or MSK).
Handle orchestration and ETL workloads, employing services like AWS Glue, Athena, Databricks, Redshift, or Apache Airflow.
Implement robust network, storage, and backup strategies for growing workloads.
Monitor, debug, and resolve production issues related to data and infrastructure in real time.
Implement IAM controls, logging, alerts, and security best practices across all components.
Provide deployment automation (Docker, Terraform, CloudFormation) and collaborate with application engineers to enable smooth delivery.
Build SOPs for support and set up a functioning 24x7 support system (including hiring the right engineers) to ensure system uptime and availability.

Required Technical Skills
5+ years of experience with AWS services (VPC, EC2, S3, Security Groups, RDS, Kinesis, MSK, Redshift, Glue).
Experience designing and managing large-scale data pipelines with high-throughput workloads.
Ability to handle 5 billion events/day and 1M+ concurrent users' workloads gracefully.
Familiar with scripting (Python, Terraform) and automation practices (Infrastructure as Code).
Familiar with network fundamentals, Linux, scaling strategies, and backup routines.
Collaborative team player, able to work with engineers, data analysts, and stakeholders.

Preferred Tools & Technologies
AWS: EC2, S3, VPC, Security Groups, RDS, Redshift, DocumentDB, MSK, Glue, Athena, CloudWatch
Infrastructure as Code: Terraform, CloudFormation
Scripted automation: Python, Bash
Container orchestration: Docker, ECS or EKS
Workflow orchestration: Apache Airflow, Dagster
Streaming frameworks: Apache Kafka, Kinesis, Flink
Other: Linux, Git, security best practices (IAM, Security Groups, ACM)

Education
Bachelor's/Master's degree in Computer Science, Data Science, or a related field
Relevant professional certifications in cloud platforms or data technologies

Why Join Us?
Opportunity to work in a fast-growing audio and content platform.
Exposure to multi-language marketing and global user base strategies.
A collaborative work environment with a data-driven and innovative approach.
Competitive salary and growth opportunities in marketing and growth strategy.

Success Metrics
✅ Scalability: Ability to handle 1+ billion events/day with low latency and high resiliency.
✅ Cost-efficiency: Reduction in AWS operational costs by optimizing services, storage, and data transfer.
✅ Uptime/SLI: Achieve 99.9999% platform and pipeline uptime with automated fallback mechanisms.
✅ Data delivery latency: Reduce event delivery latency to under 5 minutes for real-time processing.
✅ Security and compliance: Implement controls to pass PCI-DSS or SOC 2 audits with zero major findings.
✅ Developer productivity: Improve team delivery speed through self-service IaC modules and automated routines.
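Not part of the KUKU posting: a hedged sketch of what event ingestion into Kinesis can look like with boto3, matching the stream-processing work described above. The stream name, region, and event fields are placeholders.

```python
# Illustrative only: publish one event to a Kinesis data stream with boto3.
# Stream name, region, and payload fields are placeholder assumptions.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="ap-south-1")

event = {"user_id": "u-123", "action": "episode_play", "ts": 1718000000}
resp = kinesis.put_record(
    StreamName="listening-events",
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],  # keeps one user's events ordered within a shard
)
print(resp["ShardId"], resp["SequenceNumber"])
```

A real pipeline at the stated volume would batch records (put_records), retry on throttling, and monitor shard-level metrics; this only shows the basic producer call.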
About KUKU
Founded in 2018, KUKU is India's leading storytelling platform, offering a vast digital library of audio stories, short courses, and microdramas. KUKU aims to be India's largest cultural exporter of stories, culture, and history to the world, with a firm belief in "Create In India, Create For The World". We deliver immersive entertainment and education through our OTT platforms: Kuku FM, Guru, Kuku TV, and more. With a mission to provide high-quality, personalized stories across genres, formats, and languages, KUKU continues to push boundaries and redefine India's entertainment industry.
🌐 Website: www.kukufm.com
📱 Android App: Google Play
📱 iOS App: App Store
🔗 LinkedIn: KUKU
📢 Ready to make an impact? Apply now
Skills: aws services, bash, networking, kafka, data pipeline, docker, kinesis, data pipelines, etl, terraform, automation, aws, security, ec2, cloudformation, cloud, scripting, linux, infrastructure, amazon redshift, python, vpc, network fundamentals, workflow orchestration, stream processing frameworks, container orchestration, dagster, airflow, s3
Posted 5 hours ago
2.0 years
0 Lacs
Gautam Buddha Nagar, Uttar Pradesh, India
On-site
We are seeking a dynamic and experienced Technical Trainer to join our engineering department. The ideal candidate will be responsible for designing and delivering technical training sessions to B.Tech students across various domains, ensuring they are industry-ready and equipped with practical, job-oriented skills.

Role & Responsibility
To train students in new-age technology (Computer Science Engineering) to bridge the industry-academia gap, leading to an increase in the employability of the students.

Knowledge
Proven experience in devising technical training programs for UG/PG Engineering students in Higher Education Institutions
To stay abreast of the latest software as per industry standards and have knowledge of modern training techniques and tools to deliver the technical subjects
To prepare training material (presentations, worksheets, etc.)
To execute training sessions, webinars, and workshops for students
To determine the overall effectiveness of programs and make improvements

Technical Skills (Subject Areas of Delivering Training with a Practical Approach)
1. Core Programming Skills: Languages: C, Python, Java, C++, JavaScript
2. Web Development: Frontend: HTML, CSS, JavaScript, React.js/Next.js; Backend: Node.js, Express, Django, or Spring Boot; Full-Stack: MERN stack (MongoDB, Express, React, Node.js)
3. Data Science & Machine Learning: Languages: Python (NumPy, pandas, scikit-learn, TensorFlow/PyTorch); Tools: Jupyter Notebook, Google Colab, MLflow (an illustrative classroom snippet follows this posting)
4. AI & Generative AI: LLMs (Large Language Models): understand how GPT, BERT, and Llama models work; Prompt Engineering; Fine-tuning & RAG (Retrieval-Augmented Generation); Hugging Face Transformers, LangChain, OpenAI APIs
5. Cloud Computing & DevOps: Cloud Platforms: AWS, Microsoft Azure, Google Cloud Platform (GCP); DevOps Tools: Docker, Kubernetes, GitHub Actions, Jenkins, Terraform; CI/CD Pipelines: automated testing and deployment
6. Cybersecurity: Basics: OWASP Top 10, Network Security, Encryption, Firewalls; Tools: Wireshark, Metasploit, Burp Suite
7. Mobile App Development: Native: Kotlin (Android), Swift (iOS); Cross-platform: Flutter, React Native
8. Blockchain & Web3: Technologies: Ethereum, Solidity, Smart Contracts; Frameworks: Hardhat, Truffle
9. Database & Big Data: Databases: SQL (MySQL, PostgreSQL), NoSQL (MongoDB, Redis); Big Data Tools: Apache Hadoop, Spark, Kafka

Qualification & Years of Experience as per norms:
B.Tech./MCA/M.Tech (IT/CSE) from top-tier institutes and reputed universities
Industry experience is desirable.
Candidates must have a minimum of 2 years of training experience in the same domain.
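Not from the posting: a small, self-contained example of the kind of hands-on snippet the Data Science & Machine Learning module might walk students through, assuming scikit-learn is installed.

```python
# Illustrative teaching snippet: train and evaluate a simple classifier on a toy dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=200)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```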
Posted 5 hours ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary:
As part of the Cloud Network team at Thomson Reuters, you will work on delivering world-class infrastructure services to our customers using the latest technologies. We are looking for a Senior Network Cloud Engineer who can help us design and implement secure, scalable, highly available network architectures in AWS, Azure, OCI, and GCP. You will be working in agile teams and will get the opportunity to learn new technologies and tools.

About the Role:
In this role as a Senior Network Cloud Engineer, you will:
Work closely with architecture and business teams to understand their requirements and translate them into robust, reliable, and highly available network designs.
Collaborate with the security team to ensure compliance with security policies and best practices.
Design, provision, and configure networks in all cloud providers.
Implement automation solutions to reduce manual intervention and increase efficiency (a minimal reachability-check sketch follows this posting).
Participate in on-call support activities and perform post-implementation reviews to identify any issues or room for improvement.
Stay up to date with the latest trends and advancements in cloud computing and related technologies.
Maintain documentation of system designs, configurations, and procedures. Contribute to knowledge base articles and technical guides.
Actively participate in code reviews, sprint ceremonies, and other Agile/Scrum activities.

About You:
You're a fit for the role of Senior Network Cloud Engineer if your background includes:
Bachelor's degree in computer science, information technology, or a related field. Master's degree preferred but not required.
At least 5 years of experience in designing, implementing, and managing large-scale network architectures in public clouds (AWS, Azure, Google).
Strong understanding of network protocols such as TCP/IP, DNS, HTTP, SSL, etc.
Experience with configuration management tools such as Terraform, Ansible, Chef, Puppet, etc.
Excellent scripting skills using Python, PowerShell, Bash, etc.
Proficiency in at least one object-oriented programming language like Java, C#, Python, etc.
Familiarity with automated testing frameworks such as JUnit, NUnit, Pytest, etc. Practical experience writing unit tests and integration tests.
Understanding of continuous integration and continuous deployment pipelines.
Knowledge of version control systems such as Git.
Ability to communicate effectively, both verbally and in writing.
Team player mentality with the ability to collaborate across multiple disciplines.

What's in it For You?
Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
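Illustrative only (not part of the Thomson Reuters posting): a minimal Python reachability check of the sort a network cloud engineer might run before and after a change window. Hostnames and ports are invented placeholders.

```python
# Resolve and connect to a list of endpoints to verify DNS and TCP reachability.
# Uses only the standard library; endpoints below are placeholders.
import socket

ENDPOINTS = [("example.internal.corp", 443), ("db.example.internal.corp", 5432)]

for host, port in ENDPOINTS:
    try:
        ip = socket.gethostbyname(host)              # DNS resolution
        with socket.create_connection((ip, port), timeout=3):
            print(f"OK   {host}:{port} -> {ip}")
    except OSError as exc:
        print(f"FAIL {host}:{port} -> {exc}")
```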
Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.
As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law.
More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
Posted 5 hours ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About McDonald's:
One of the world's largest employers with locations in more than 100 countries, McDonald's Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Cloud Engineer II
Full-time
McDonald's Office Location: Hyderabad
Global Grade: G3

Job Description:
This opportunity is part of the Global Technology Infrastructure & Operations team (GTIO), where our mission is to deliver modern and relevant technology that supports the way McDonald's works. We provide outstanding foundational technology products and services including Global Networking, Cloud, End User Computing, and IT Service Management. It's our goal to always provide an engaging, relevant, and simple experience for our customers.
The Cloud DevOps Engineer II role is part of the Cloud Infrastructure and Operations team in Global Technology Infrastructure & Operations. The role reports to the Director of Cloud DevOps and is responsible for supporting, migrating, automating, and optimizing the software development and deployment process, specifically for Google Cloud. The Cloud DevOps Engineer II will work closely with software developers, cloud architects, operations engineers, and other stakeholders to ensure that the software delivery process is efficient, secure, and scalable. You will support the Corporate, Cloud Security, Cloud Platform, Digital, Data, Restaurant, and Market application and product teams by efficiently and optimally delivering DevOps standards and services. This is a great opportunity for an experienced technology leader to help craft the transformation of infrastructure and operations products and services for the entire McDonald's environment.

Responsibilities & Accountabilities:
Participate in the management, design, and solutioning of platform deployment and operational processes.
Provide direction and guidance to vendors partnering on DevOps tools standardization and engineering support.
Configure and deploy reusable pipeline templates for automated deployment of cloud infrastructure and code.
Proactively identify opportunities for continuous improvement.
Research, analyze, design, develop, and support high-quality automation workflows inside and outside the cloud platform that are appropriate for business and technology strategies.
Develop and maintain infrastructure and tools that support the software development and deployment process.
Automate the software development and deployment process.
Monitor and troubleshoot the software delivery process.
Work with software developers and operations engineers to improve the software delivery process.
Stay up to date on the latest DevOps practices and technologies.
Drive proofs of concept and conduct technical feasibility studies for business requirements.
Strive to provide internal and external customers with excellent customer service and world-class service.
Effectively communicate project health, risks, and issues to the program partners, sponsors, and management teams.
Resolve most conflicts between timeline, budget, and scope independently, but intuitively raise complex or consequential issues to senior management.
Implement and support monitoring best practices.
Respond to platform and operational incidents and effectively troubleshoot and resolve issues.
Work well in an agile environment.

Qualifications:
Bachelor's degree in computer science or a related field, or relevant experience.
5+ years of Information Technology experience for a large technology company, preferably in a platform team.
4+ years of hands-on experience with Cloud DevOps pipelines for automating, building, and deploying microservice applications, APIs, and non-container artifacts.
3+ years working with Cloud technologies with good knowledge of IaaS and PaaS offerings in AWS & GCP.
3+ years with GitHub, Jenkins, GitHub Actions, ArgoCD, Helm Charts, Harness, and Artifactory or similar DevOps CI/CD tools.
3+ years of application development using agile methodology.
Experience with observability tools like Datadog, New Relic, and the open-source (O11y) observability ecosystem (Prometheus, Grafana, Jaeger); a minimal instrumentation sketch follows this posting.
Hands-on knowledge of Infrastructure-as-Code and associated technologies (e.g., repos, pipelines, Terraform, etc.).
Advanced knowledge of the AWS platform, preferably 3+ years of AWS / Kubernetes experience or container-based technology.
It is good to have experience working with Code Quality, SAST, and DAST tools like SonarQube / SonarCloud, Veracode, Checkmarx, and Snyk.
Experience developing scripts or automating tasks using languages such as Bash, PowerShell, Python, Perl, Ruby, etc.
Self-starter, able to come up with solutions to problems and complete those solutions while coordinating with other teams.
Knowledge of foundational cloud security principles.
Excellent problem-solving and analytical skills.
Strong communication and partnership skills.
Any GCP certification. Any Agile certification, preferably scaled agile.
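Not part of the McDonald's posting: a minimal sketch of Prometheus-style instrumentation in Python, in the spirit of the observability ecosystem listed above. Metric names, the port, and the simulated deploy step are assumptions.

```python
# Expose counters and a histogram for a (simulated) deployment step so Prometheus can scrape them.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

DEPLOYS = Counter("pipeline_deployments_total", "Deployments attempted", ["status"])
DURATION = Histogram("pipeline_deploy_duration_seconds", "Deployment duration")

def deploy() -> None:
    with DURATION.time():                      # records elapsed time into the histogram
        time.sleep(random.uniform(0.1, 0.5))   # stand-in for the real deployment work
    DEPLOYS.labels(status="success").inc()

if __name__ == "__main__":
    start_http_server(9100)                    # scrape target on :9100/metrics
    while True:
        deploy()
        time.sleep(5)
```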
Posted 5 hours ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About McDonald's:
One of the world's largest employers with locations in more than 100 countries, McDonald's Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Cloud Engineer III
Full-time
McDonald's Office Location: Hyderabad
Global Grade: G4

Job Description:
This opportunity is part of the Global Technology Infrastructure & Operations team (GTIO), where our mission is to deliver modern and relevant technology that supports the way McDonald's works. We provide outstanding foundational technology products and services including Global Networking, Cloud, End User Computing, and IT Service Management. It's our goal to always provide an engaging, relevant, and simple experience for our customers.
The Cloud DevOps Engineer III role is part of the Cloud Infrastructure and Operations team in Global Technology Infrastructure & Operations. The role reports to the Director of Cloud DevOps and is responsible for supporting, migrating, automating, and optimizing the software development and deployment process, specifically for Google Cloud. The Cloud DevOps Engineer III will work closely with software developers, cloud architects, operations engineers, and other stakeholders to ensure that the software delivery process is efficient, secure, and scalable. You will support the Corporate, Cloud Security, Cloud Platform, Digital, Data, Restaurant, and Market application and product teams by efficiently and optimally delivering DevOps standards and services. This is a great opportunity for an experienced technology leader to help craft the transformation of infrastructure and operations products and services for the entire McDonald's environment.

Responsibilities & Accountabilities:
Participate in the management, design, and solutioning of platform deployment and operational processes.
Provide direction and guidance to vendors partnering on DevOps tools standardization and engineering support.
Configure and deploy reusable pipeline templates for automated deployment of cloud infrastructure and code.
Proactively identify opportunities for continuous improvement.
Research, analyze, design, develop, and support high-quality automation workflows inside and outside the cloud platform that are appropriate for business and technology strategies.
Develop and maintain infrastructure and tools that support the software development and deployment process.
Automate the software development and deployment process.
Monitor and troubleshoot the software delivery process.
Work with software developers and operations engineers to improve the software delivery process.
Stay up to date on the latest DevOps practices and technologies.
Drive proofs of concept and conduct technical feasibility studies for business requirements.
Strive to provide internal and external customers with excellent customer service and world-class service.
Effectively communicate project health, risks, and issues to the program partners, sponsors, and management teams.
Resolve most conflicts between timeline, budget, and scope independently, but intuitively raise complex or consequential issues to senior management.
Work well in an agile environment.
Implement and support monitoring best practices.
Respond to platform and operational incidents and effectively troubleshoot and resolve issues.
Provide technical advice and support the growth of junior team members.

Qualifications:
Bachelor's degree in computer science or a related field, or relevant experience.
7+ years of Information Technology experience for a large technology company, preferably in a platform team.
6+ years of hands-on experience with Cloud DevOps pipelines for automating, building, and deploying microservice applications, APIs, and non-container artifacts.
5+ years working with Cloud technologies with good knowledge of IaaS and PaaS offerings in AWS & GCP.
5+ years with GitHub, Jenkins, GitHub Actions, ArgoCD, Helm Charts, Harness, and Artifactory or similar DevOps CI/CD tools.
3+ years of application development using agile methodology.
Experience with observability tools like Datadog, New Relic, and the open-source (O11y) observability ecosystem (Prometheus, Grafana, Jaeger).
Hands-on knowledge of Infrastructure-as-Code and associated technologies (e.g., repos, pipelines, Terraform, etc.); a minimal pipeline wrapper sketch follows this posting.
Advanced knowledge of the AWS platform, preferably 3+ years of AWS / Kubernetes experience or container-based technology.
It is good to have experience working with Code Quality, SAST, and DAST tools like SonarQube / SonarCloud, Veracode, Checkmarx, and Snyk.
Experience developing scripts or automating tasks using languages such as Bash, PowerShell, Python, Perl, Ruby, etc.
Self-starter, able to come up with solutions to problems and complete those solutions while coordinating with other teams.
Knowledge of foundational cloud security principles.
Excellent problem-solving and analytical skills.
Strong communication and partnership skills.
Any GCP certification. Any Agile certification, preferably scaled agile.
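Not part of the posting: a hedged sketch of a small Python wrapper a CI job might use to run Terraform steps with consistent flags, assuming the terraform binary is on PATH and the working directory is a Terraform root module.

```python
# Run the usual Terraform pipeline steps (init, fmt check, validate, plan) from a CI job.
import subprocess
import sys

def run(*args: str) -> None:
    print("+ terraform", " ".join(args))
    subprocess.run(["terraform", *args], check=True)  # raise on non-zero exit

def main() -> int:
    run("init", "-input=false")
    run("fmt", "-check")
    run("validate")
    run("plan", "-input=false", "-out=tfplan")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```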
Posted 5 hours ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About McDonald's:
One of the world's largest employers with locations in more than 100 countries, McDonald's Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Job Description:
This opportunity is part of the Global Technology Infrastructure & Operations team (GTIO), where our mission is to deliver modern and relevant technology that supports the way McDonald's works. We provide outstanding foundational technology products and services including Global Networking, Cloud, End User Computing, and IT Service Management. It's our goal to always provide an engaging, relevant, and simple experience for our customers.
The Cloud DevOps Engineer III role is part of the Cloud Infrastructure and Operations team in Global Technology Infrastructure & Operations. The role reports to the Director of Cloud DevOps and is responsible for supporting, migrating, automating, and optimizing the software development and deployment process, specifically for Google Cloud. The Cloud DevOps Engineer III will work closely with software developers, cloud architects, operations engineers, and other stakeholders to ensure that the software delivery process is efficient, secure, and scalable. You will support the Corporate, Cloud Security, Cloud Platform, Digital, Data, Restaurant, and Market application and product teams by efficiently and optimally delivering DevOps standards and services. This is a great opportunity for an experienced technology leader to help craft the transformation of infrastructure and operations products and services for the entire McDonald's environment.

Responsibilities & Accountabilities:
Participate in the management, design, and solutioning of platform deployment and operational processes.
Provide direction and guidance to vendors partnering on DevOps tools standardization and engineering support.
Configure and deploy reusable pipeline templates for automated deployment of cloud infrastructure and code.
Proactively identify opportunities for continuous improvement.
Research, analyze, design, develop, and support high-quality automation workflows inside and outside the cloud platform that are appropriate for business and technology strategies.
Develop and maintain infrastructure and tools that support the software development and deployment process.
Automate the software development and deployment process.
Monitor and troubleshoot the software delivery process.
Work with software developers and operations engineers to improve the software delivery process.
Stay up to date on the latest DevOps practices and technologies.
Drive proofs of concept and conduct technical feasibility studies for business requirements.
Strive to provide internal and external customers with excellent customer service and world-class service.
Effectively communicate project health, risks, and issues to the program partners, sponsors, and management teams.
Resolve most conflicts between timeline, budget, and scope independently, but intuitively raise complex or consequential issues to senior management.
Work well in an agile environment.
Implement and support monitoring best practices.
Respond to platform and operational incidents and effectively troubleshoot and resolve issues.
Provide technical advice and support the growth of junior team members.

Qualifications:
Bachelor's degree in computer science or a related field, or relevant experience.
7+ years of Information Technology experience for a large technology company, preferably in a platform team.
6+ years of hands-on experience with Cloud DevOps pipelines for automating, building, and deploying microservice applications, APIs, and non-container artifacts.
5+ years working with Cloud technologies with good knowledge of IaaS and PaaS offerings in AWS & GCP.
5+ years with GitHub, Jenkins, GitHub Actions, ArgoCD, Helm Charts, Harness, and Artifactory or similar DevOps CI/CD tools.
3+ years of application development using agile methodology.
Experience with observability tools like Datadog, New Relic, and the open-source (O11y) observability ecosystem (Prometheus, Grafana, Jaeger).
Hands-on knowledge of Infrastructure-as-Code and associated technologies (e.g., repos, pipelines, Terraform, etc.).
Advanced knowledge of the AWS platform, preferably 3+ years of AWS / Kubernetes experience or container-based technology.
It is good to have experience working with Code Quality, SAST, and DAST tools like SonarQube / SonarCloud, Veracode, Checkmarx, and Snyk.
Experience developing scripts or automating tasks using languages such as Bash, PowerShell, Python, Perl, Ruby, etc.
Self-starter, able to come up with solutions to problems and complete those solutions while coordinating with other teams.
Knowledge of foundational cloud security principles.
Excellent problem-solving and analytical skills.
Strong communication and partnership skills.
Any GCP certification. Any Agile certification, preferably scaled agile.
Posted 5 hours ago
4.0 years
0 Lacs
Kerala, India
Remote
About FriskaAi
FriskaAi is a powerful AI-enabled, EHR-agnostic platform designed to help healthcare providers adopt an evidence-based approach to care. Our technology addresses up to 80% of chronic diseases, including obesity and type 2 diabetes, enabling better patient outcomes.
📍 Location: Remote
💼 Job Type: Full-Time

Job Description
We are seeking a highly skilled Backend Developer to join our team. The ideal candidate will have expertise in Python and Django, with experience in SQL and working in a cloud-based environment on Microsoft Azure. You will be responsible for designing, developing, and optimizing backend systems that drive our healthcare platform and ensure seamless data flow and integration.

Key Responsibilities
Backend Development: Develop and maintain scalable backend services using Python and Django. Build and optimize RESTful APIs for seamless integration with frontend and third-party services. Implement efficient data processing and business logic to support platform functionality.
Database Management: Design and manage database schemas using Azure SQL or PostgreSQL. Write and optimize SQL queries, stored procedures, and functions. Ensure data integrity and security through proper indexing and constraints.
API Development & Integration: Develop secure and efficient RESTful APIs for frontend and external integrations. Ensure consistent and reliable data exchange between systems. Optimize API performance and scalability.
Cloud & Infrastructure: Deploy and manage backend applications on Azure App Service and Azure Functions. Set up and maintain CI/CD pipelines using Azure DevOps. Implement monitoring and logging using Azure Application Insights.
Microservices Architecture: Design and implement microservices to modularize backend components. Ensure smooth communication between services using messaging queues or REST APIs. Optimize microservices for scalability and fault tolerance.
Testing & Debugging: Write unit and integration tests using Pytest. Debug and resolve production issues quickly and efficiently. Ensure code quality and reliability through regular code reviews.
Collaboration & Optimization: Work closely with frontend developers, product managers, and stakeholders. Conduct code reviews to maintain high-quality standards. Optimize database queries, API responses, and backend processes for maximum performance.
Qualifications
Education & Experience
🎓 Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience)
🔹 2–4 years of backend development experience

Technical Skills
✔ Proficiency in Python and Django
✔ Strong expertise in SQL (e.g., Azure SQL, PostgreSQL, MySQL)
✔ Experience with RESTful API design and development
✔ Familiarity with microservices architecture
✔ Hands-on experience with Azure services, including: Azure App Service, Azure Functions, Azure Storage, Azure Key Vault
✔ Experience with CI/CD using Azure DevOps
✔ Proficiency with version control tools like Git
✔ Knowledge of containerization with Docker

Soft Skills
🔹 Strong problem-solving skills and attention to detail
🔹 Excellent communication and teamwork abilities
🔹 Ability to thrive in a fast-paced, agile environment

Preferred Skills (Nice to Have)
✔ Experience with Kubernetes (AKS) for container orchestration
✔ Knowledge of Redis for caching
✔ Experience with Celery for asynchronous task management (a minimal task sketch follows this posting)
✔ Familiarity with GraphQL for data querying
✔ Understanding of infrastructure as code (IaC) using Terraform or Bicep

What We Offer
✅ Competitive salary & benefits package
✅ Opportunity to work on cutting-edge AI-driven solutions
✅ A collaborative and inclusive work environment
✅ Professional development & growth opportunities

🚀 If you're passionate about backend development and eager to contribute to innovative healthcare solutions, we'd love to hear from you!
🔗 Apply now and be part of our mission to transform healthcare!
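Not from the FriskaAi posting: a minimal Celery sketch of the asynchronous task pattern mentioned under preferred skills. The broker URL, task name, and email helper are placeholder assumptions.

```python
# Define an asynchronous, retryable task with Celery and a Redis broker (both placeholders).
from celery import Celery

app = Celery("friska_backend", broker="redis://localhost:6379/0")

def deliver_email(patient_id: int) -> None:
    # Hypothetical stand-in for a call to a real mail service.
    print(f"sending care-plan email for patient {patient_id}")

@app.task(bind=True, max_retries=3, default_retry_delay=30)
def send_care_plan_email(self, patient_id: int) -> None:
    try:
        deliver_email(patient_id)
    except ConnectionError as exc:
        raise self.retry(exc=exc)  # re-queue with a delay instead of failing the web request
```

A caller would enqueue the work with send_care_plan_email.delay(42) so the HTTP request returns immediately while a worker handles delivery.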
Posted 6 hours ago
0.0 years
0 Lacs
Vijay Nagar, Indore, Madhya Pradesh
On-site
Job Title: AWS DevOps Engineer Internship
Company: Inventurs Cube LLP
Location: Indore, Madhya Pradesh
Job Type: Full-time Internship
Duration: 1 to 3 months

Responsibilities:
Assist in the design, implementation, and maintenance of AWS infrastructure using Infrastructure as Code (IaC) principles (e.g., CloudFormation, Terraform).
Learn and apply CI/CD (Continuous Integration/Continuous Deployment) pipelines for automated software releases.
Support the monitoring and logging of AWS services to ensure optimal performance and availability.
Collaborate with development teams to understand application requirements and implement appropriate cloud solutions.
Help troubleshoot and resolve infrastructure-related issues.
Participate in security best practices implementation and review.
Contribute to documentation of cloud architecture, configurations, and processes.
Stay updated with the latest AWS services and DevOps trends.

What We're Looking For:
Currently pursuing a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
Basic understanding of cloud computing concepts, preferably AWS.
Familiarity with at least one scripting language (e.g., Python, Bash).
Knowledge of Linux/Unix operating systems.
Eagerness to learn and a strong problem-solving aptitude.
Excellent communication and teamwork skills.
Ability to work independently and take initiative.

Bonus Points (Not Mandatory, but a Plus):
Prior experience with AWS services (e.g., EC2, S3, VPC, IAM); a small starter sketch follows this posting.
Basic understanding of version control systems (e.g., Git).
Exposure to containerization technologies (e.g., Docker, Kubernetes).
Familiarity with CI/CD tools (e.g., Jenkins, GitLab CI, AWS CodePipeline).

What You'll Gain:
Hands-on experience with industry-leading AWS cloud services and DevOps tools.
Mentorship from experienced AWS DevOps engineers.
Exposure to real-world projects and agile development methodologies.
Opportunity to build a strong foundation for a career in cloud and DevOps.
A dynamic and supportive work environment in Indore.
Certificate of internship completion.

Job Types: Full-time, Fresher, Internship
Contract length: 3 months
Pay: ₹15,000.00 - ₹20,000.00 per month
Schedule: Day shift
Work Location: In person
Speak with the employer: +91 9685458368
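Not part of the internship posting: a small starter sketch of the kind of boto3 exercise an intern might begin with. The region, bucket name, and file name are placeholders, and the caller's IAM credentials are assumed to allow ec2:DescribeInstances and s3:PutObject.

```python
# List running EC2 instances and upload a simple report to S3 with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")
s3 = boto3.client("s3", region_name="ap-south-1")

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]
instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
print("running instances:", instance_ids)

with open("report.txt", "w") as fh:
    fh.write("\n".join(instance_ids))
s3.upload_file("report.txt", "my-intern-reports-bucket", "reports/report.txt")
```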
Posted 6 hours ago
6.0 - 9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Birlasoft:
Birlasoft is a powerhouse where domain expertise, enterprise solutions, and digital technologies converge to redefine business processes. We take pride in our consultative and design thinking approach, driving societal progress by enabling our customers to run businesses with unmatched efficiency and innovation. As part of the CKA Birla Group, a multibillion-dollar enterprise, we boast a 12,500+ professional team committed to upholding the Group's 162-year legacy. Our core values prioritize Diversity, Equity, and Inclusion (DEI) initiatives, along with Corporate Sustainable Responsibility (CSR) activities, demonstrating our dedication to building inclusive and sustainable communities. Join us in shaping a future where technology seamlessly aligns with purpose.

About the Job:
Familiar with Cloud Engineering to leverage Cloud and DevOps based technologies provided by the platform teams. Collaborates with the Product Manager to align technical solutions with business goals and serves as the escalation point for cloud engineering issues. Supports the product technical architecture, alignment to the technology roadmap, and technical engineering standards.

Job Title - Sr Technical Lead
Location: Pune
Experience: 6-9 years
Educational Background: Bachelor's degree in Computer Science, Information Technology, or a related field.

Key Responsibilities:
This individual will assist with setting up and provisioning architecture, optimizing efforts for infrastructure, and deploying best practices and excellence in automation techniques. Valuable technical skill sets for this individual include:
Azure or AWS certifications
DevOps certification
Scripting certification (preferably Python)
Previous Agile experience
Experience with at least some automation tools such as Ansible, Puppet, Chef, Salt, and Terraform.
Posted 6 hours ago
7.0 - 12.0 years
12 - 18 Lacs
Pune, Chennai, Coimbatore
Hybrid
Hiring "Azure & DevOps" for Pune/Chennai/Coimbatore locations.
Overall Experience: 6-12 yrs
If you are interested in the below-mentioned position, please share your updated CV to sandhya_allam@epam.com along with the following details (shortlisted applicants will be contacted directly):
1. Have you applied for a role at EPAM in recent times?
2. Years of experience in Azure Cloud and DevOps solutions
3. Years of experience in Docker & Kubernetes
4. Years of experience in Terraform
5. Experience in Python/Bash/PowerShell
6. Current salary
7. Expected salary
8. Notice period (negotiable or mandatory)

Responsibilities:
Responsible for fault tolerance, high availability, scalability, and security on Azure infrastructure and platform.
Responsible for implementation of CI/CD pipelines with automated build and test systems.
Responsible for production deployment using multiple deployment strategies.
Responsible for automating Azure infrastructure and platform deployment with IaC.
Responsible for automating system configurations using configuration management tools.
Hands-on production experience with Azure compute services: VM management, VMSS, AKS, Container Instances, autoscaling, Load Balancers, Spot Instances, App Service.
Hands-on production experience with Azure network services: VNET, Subnets, ExpressRoute, Azure Gateway, VPN, Load Balancer, DNS, Traffic Manager, CDN, Front Door, Private Link, Network Watcher.
Good automation skills using Azure orchestration tools: Terraform, Ansible, ARM, and CLI (a minimal inventory sketch follows this posting).
Hands-on production experience with Docker and container orchestration using AKS and ACR.
Ability to write scripts (Linux shell/Python/PowerShell/Bash/CLI) to automate cloud automation tasks.
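Not from the EPAM posting: a minimal inventory sketch using the Azure SDK for Python, one of several ways (alongside Azure CLI, PowerShell, or Terraform data sources) to enumerate compute resources before automating them. The subscription ID is a placeholder.

```python
# Enumerate all virtual machines in a subscription with the Azure SDK for Python.
# Assumes `azure-identity` and `azure-mgmt-compute` are installed and the caller
# is authenticated; the subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

for vm in compute.virtual_machines.list_all():
    print(vm.name, vm.location, vm.hardware_profile.vm_size)
```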
Posted 6 hours ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role Description
Job Title: Python Developer with AWS
Experience: 5+ yrs
Location: Hyderabad
Notice period: Immediate joiners only (0-10 days)
Primary skills: Python developer, AWS (S3, EC2, Lambda, API)

Detailed Job Description
5+ years of work experience using Python and AWS for developing enterprise software applications.
Experience in Apache Kafka, including topic creation, message optimization, and efficient message processing.
Skilled in Docker and container orchestration tools such as Amazon EKS or ECS.
Strong experience managing AWS components, including Lambda (Java), API Gateway, RDS, EC2, and CloudWatch (a minimal handler sketch follows this posting).
Experience working in an automated DevOps environment, using tools like Jenkins, SonarQube, Nexus, and Terraform for deployments.
Hands-on experience with Java-based web services, RESTful approaches, ORM technologies, and SQL procedures in Java.
Experience with Git for code versioning and commit management.
Experience working in Agile teams with a strong focus on collaboration and iterative development.
Ability to implement changes following standard turnover procedures, with a CI/CD focus.
Bachelor's or Master's degree in Computer Science, Information Systems, or equivalent.

Skills: Python developer, API design, architecture, AWS, OOP, S3, Django, FastAPI, Flask
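Not part of the posting: a hedged sketch of a Python Lambda handler behind API Gateway that reads an object from S3, the kind of component the role describes. The bucket name, default key, and required IAM permissions are assumptions.

```python
# Minimal Lambda handler for an API Gateway proxy integration: fetch a JSON object
# from S3 and return a summary. Assumes the function's execution role allows s3:GetObject.
import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    key = (event.get("queryStringParameters") or {}).get("key", "default.json")
    obj = s3.get_object(Bucket="example-data-bucket", Key=key)   # placeholder bucket
    payload = json.loads(obj["Body"].read())
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"key": key, "records": len(payload)}),
    }
```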
Posted 6 hours ago
3.0 years
0 Lacs
India
Remote
About the Role
At Ceryneian, we're building a next-generation, research-driven algorithmic trading platform aimed at democratizing access to hedge fund-grade financial analytics. Headquartered in California, Ceryneian is a fintech innovation company dedicated to empowering traders with sophisticated yet accessible tools for quantitative research, strategy development, and execution. Our flagship platform is currently under development.
As our DevOps Engineer, you will bridge our backend systems (strategy engine, broker APIs) and frontend applications (analytics dashboards, client portals). You will own the design and execution of scalable infrastructure, CI/CD automation, and system observability in a high-frequency, multi-tenant trading environment. This role is central to deploying our containerized strategy engine (Lean-based), while ensuring data integrity, latency optimization, and cost-efficient scalability. We are a remote-first team and are open to hiring exceptional candidates globally.

Key Responsibilities
Design secure, scalable environments for containerized, multi-tenant API services and user-isolated strategy runners.
Implement low-latency cloud infrastructure across development, staging, and production environments.
Automate the CI/CD lifecycle, from pipeline design to versioned production deployment (GitHub Actions, GitLab CI, etc.).
Manage Dockerized containers and orchestrate deployment with Kubernetes, ECS, or similar systems.
Collaborate with backend and frontend teams to define infrastructure and deployment workflows.
Optimize and monitor high-throughput data pipelines for strategy engines using tools like ClickHouse.
Integrate observability stacks: Prometheus, Grafana, ELK, or Datadog for logs, metrics, and alerts.
Support automated rollbacks, canary releases, and resilient deployment practices.
Automate infrastructure provisioning using Terraform or Ansible (Infrastructure as Code).
Ensure system security, audit readiness (SOC2, GDPR, SEBI), and comprehensive access control logging.
Contribute to high-availability architecture and event-driven design for alerting and strategy signals.

Technical Competencies Required
Cloud: AWS (preferred), GCP, or Azure.
Containerization: Proficiency with Docker and orchestration tools (Kubernetes, ECS, etc.).
CI/CD: Experience with YAML-based pipelines using GitHub Actions, GitLab CI/CD, or similar tools.
Data Systems: Familiarity with PostgreSQL, MongoDB, ClickHouse, or Supabase.
Monitoring: Setup and scaling of observability tools like Prometheus, ELK Stack, or Datadog.
Distributed Systems: Strong understanding of scalable microservices, caching, and message queues.
Event-Driven Architecture: Experience with Kafka, Redis Streams, or AWS SNS/SQS (preferred); see the sketch after this posting.
Cost Optimization: Ability to build cold-start strategy runners and enable cloud auto-scaling.
0–3 years of experience.

Nice-to-Haves
Experience with real-time or high-frequency trading systems.
Familiarity with broker integrations and exchange APIs (e.g., Zerodha, Dhan).
Understanding of IAM, role-based access control systems, and multi-region deployments.
Educational background from Tier-I or Tier-II institutions with strong CS fundamentals, passion for scalable infrastructure, and a drive to build cutting-edge fintech systems.

What We Offer
Opportunity to shape the core DevOps and infrastructure for a next-generation fintech product.
Exposure to real-time strategy execution, backtesting systems, and quantitative modeling.
Competitive compensation with performance-based bonuses.
Remote-friendly culture with async-first communication.
Collaboration with a world-class team from Pomona, UCLA, Harvey Mudd, and Claremont McKenna.
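Not from the Ceryneian posting: a minimal sketch of the Redis Streams option mentioned under event-driven architecture, using redis-py. The stream name, fields, and connection details are placeholders.

```python
# Publish and read strategy signals with Redis Streams via redis-py.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Producer: append a signal event to the stream.
r.xadd("strategy:signals", {"symbol": "RELIANCE", "side": "BUY", "qty": "10"})

# Consumer: read up to 10 entries starting from the beginning of the stream.
for stream, entries in r.xread({"strategy:signals": "0-0"}, count=10, block=1000):
    for entry_id, fields in entries:
        print(stream, entry_id, fields)
```

In a real deployment, consumers would typically use consumer groups (xreadgroup/xack) so multiple workers can share the stream with at-least-once delivery.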
Posted 6 hours ago
0.0 - 5.0 years
0 Lacs
Chetput, Chennai, Tamil Nadu
On-site
Job Description: Azure Infrastructure Engineer
Exp: 7+ Years
CTC: 20 LPA
Notice period: Immediate – 15 days
Base Location: Chennai (Onsite - Saudi Arabia (KSA))
Profile source: Anywhere in India
Timings: 1:00 pm - 10:00 pm
Work Mode: WFO (Mon-Fri)

We are looking for an Azure Infrastructure Engineer with 3–5 years of experience who understands cloud architecture and security best practices aligned with the Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM). The candidate will be responsible for designing, implementing, and managing secure and scalable infrastructure on Microsoft Azure, ensuring compliance with CSA security principles and regulatory standards.

Key Responsibilities:
Design and deploy Azure infrastructure with a security-first mindset, aligned with the CSA CCM and the Azure Well-Architected Framework.
Implement identity and access controls (RBAC, Azure AD, MFA, Conditional Access) as per the CSA IAM domain.
Ensure data protection using Azure encryption capabilities (at rest, in transit, and in use).
Deploy network security architectures (NSGs, Azure Firewall, Private Link, ExpressRoute) compliant with CSA and NIST guidelines.
Enable security monitoring and incident response with Azure Defender, Sentinel, and Security Center.
Map and document infrastructure against CSA CCM controls.
Ensure infrastructure is compliant with CIS Benchmarks, ISO 27001, and CSA STAR guidelines.
Automate infrastructure provisioning with ARM templates, Bicep, or Terraform, integrating security guardrails.
Perform periodic vulnerability assessments and remediation aligned with CSA guidelines.

Required Skills & Qualifications:
3–5 years of experience in Azure cloud infrastructure.
Strong hands-on experience in Azure IaaS (VMs, VNETs, Storage, Load Balancers, etc.).
In-depth knowledge of Azure security tools (Azure Security Center, Defender for Cloud, Sentinel).
Familiarity with the Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM) and CAIQ.
Strong understanding of identity and access management principles.
Proficient in scripting (PowerShell, Azure CLI) and IaC (ARM/Bicep/Terraform); a minimal secret-retrieval sketch follows this posting.
Experience working in regulated industries (e.g., healthcare, finance) is a plus.

Certifications (Preferred):
Microsoft Certified: Azure Security Engineer Associate (AZ-500)
Microsoft Certified: Azure Solutions Architect Expert
CSA CCSK (Certificate of Cloud Security Knowledge) or CCSP

Soft Skills:
Excellent documentation and communication skills.
Ability to translate compliance requirements into technical controls.
Strong collaboration skills with security, operations, and compliance teams.

Job Type: Full-time
Pay: From ₹60,000.00 per month
Schedule: Night shift
Supplemental Pay: Performance bonus
Ability to commute/relocate: Chetput, Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Required)
Experience: total work: 5 years (Preferred)
Work Location: In person
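Not part of the posting (which names PowerShell and Azure CLI for scripting): purely as an illustration, a minimal Python sketch of retrieving a secret from Azure Key Vault under RBAC. The vault URL and secret name are placeholders, and the caller is assumed to have "get" permission on secrets.

```python
# Fetch a secret from Azure Key Vault with the Python SDK.
# Assumes `azure-identity` and `azure-keyvault-secrets` are installed.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://example-vault.vault.azure.net",  # placeholder vault
    credential=DefaultAzureCredential(),
)
secret = client.get_secret("db-connection-string")       # placeholder secret name
print(secret.name, "retrieved (value intentionally not printed)")
```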
Posted 7 hours ago
3.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Veeam, the #1 global market leader in data resilience, believes businesses should control all their data whenever and wherever they need it. Veeam provides data resilience through data backup, data recovery, data portability, data security, and data intelligence. Based in Seattle, Veeam protects over 550,000 customers worldwide who trust Veeam to keep their businesses running. We’re looking for a Platform Engineer to join the Veeam Data Cloud. The mission of the Platform Engineering team is to provide a secure, reliable, and easy to use platform to enable our teams to build, test, deploy, and monitor the VDC product. This is an excellent opportunity for someone with cloud infrastructure and software development experience to build the world’s most successful, modern, data protection platform. Your tasks will include: Write and maintain code to automate our public cloud infrastructure, software delivery pipeline, other enablement tools, and internally consumed platform services Document system design, configurations, processes, and decisions to support our async, distributed team culture Collaborate with a team of remote engineers to build the VDC platform Work with a modern technology stack based on containers, serverless infrastructure, public cloud services, and other cutting-edge technologies in the SaaS domain On-call rotation for product operations Technologies we work with: Kubernetes, Azure AKS, AWS EKS, Helm, Docker, Terraform, Golang, Bash, Git, etc. What we expect from you: 3+ years of experience in production operations for a SaaS (Software as a Service) or cloud service provider Experience automating infrastructure through code using technologies such as Pulumi or Terraform Experience with GitHub Actions Experience with a breadth and depth of public cloud services Experience building and supporting enterprise SaaS products Understanding of the principles of operational excellence in a SaaS environment. Possessing scripting skills in languages like Bash or Python Understanding and experience implementing secure design principles in the cloud Demonstrated ability to learn new technologies quickly and implement those technologies in a pragmatic manner A strong bias toward action and direct, frequent communication A university degree in a technical field Will be an advantage: Experience with Azure Experience with high-level programming languages such as Go, Java, C/C++, etc. We offer: Family Medical Insurance Annual flexible spending allowance for health and well-being Life insurance Personal accident insurance Employee Assistance Program A comprehensive leave package, including parental leave Meal Benefit Pass Transportation Allowance Monthly Daycare Allowance Veeam Care Days – additional 24 hours for your volunteering activities Professional training and education, including courses and workshops, internal meetups, and unlimited access to our online learning platforms (Percipio, Athena, O’Reilly) and mentoring through our MentorLab program Please note: If the applicant is permanently located outside India, Veeam reserves the right to decline the application. #Hybrid Veeam Software is an equal opportunity employer and does not tolerate discrimination in any form on the basis of race, color, religion, gender, age, national origin, citizenship, disability, veteran status or any other classification protected by federal, state or local law. All your information will be kept confidential. 
Please note that any personal data collected from you during the recruitment process will be processed in accordance with our Recruiting Privacy Notice. The Privacy Notice sets out the basis on which the personal data collected from you, or that you provide to us, will be processed by us in connection with our recruitment processes. By applying for this position, you consent to the processing of your personal data in accordance with our Recruiting Privacy Notice. Show more Show less
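Since the posting names Pulumi and Terraform for infrastructure as code, here is a hedged Pulumi sketch in Python, purely to illustrate declaring cloud resources as code; the resource names and tags are invented, and the team's real stack may differ (the listing also mentions Terraform and Golang).

```python
# Sketch: a tiny Pulumi program declaring a tagged, versioned S3 bucket.
# Assumes the pulumi and pulumi-aws packages and configured AWS credentials;
# names and tags are placeholders.
import pulumi
import pulumi_aws as aws

artifact_bucket = aws.s3.Bucket(
    "vdc-build-artifacts",                       # logical name; physical name is auto-suffixed
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
    tags={"team": "platform", "managed-by": "pulumi"},
)

pulumi.export("artifact_bucket_name", artifact_bucket.id)
```

Running `pulumi up` against a stack would preview and apply the change, much like a `terraform plan`/`apply` cycle.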
Posted 7 hours ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now.
We are currently seeking a GCP DevOps Engineer to join our team in Bangalore/Hyderabad/Chennai/Gurgaon/Noida, Karnataka (IN-KA), India (IN).
Responsibilities
Design, implement, and manage GCP infrastructure using Infrastructure as Code (IaC) tools.
Develop and maintain CI/CD pipelines to improve development workflows.
Monitor system performance and ensure high availability of cloud resources.
Collaborate with development teams to streamline application deployments.
Maintain security best practices and compliance across the cloud environment.
Automate repetitive tasks to enhance operational efficiency.
Troubleshoot and resolve infrastructure-related issues in a timely manner.
Document procedures, policies, and configurations for the infrastructure.
Skills
Google Cloud Platform (GCP)
Terraform
Ansible
CI/CD
Kubernetes
Docker
Python
Bash/Shell Scripting
Monitoring tools (e.g., Prometheus, Grafana)
Cloud Security
Jenkins
Git
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com
NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications.
NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here. Show more Show less
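As a hedged illustration of the "manage GCP infrastructure using IaC" and "automate repetitive tasks" responsibilities, here is a small Python wrapper that detects configuration drift with `terraform plan -detailed-exitcode`; the working-directory layout is an assumption, not an NTT DATA convention.

```python
# Sketch: detect infrastructure drift with `terraform plan -detailed-exitcode`.
# Terraform exit codes: 0 = no changes, 2 = drift/changes pending, 1 = error.
import subprocess
import sys

def check_drift(workdir: str) -> int:
    subprocess.run(["terraform", "init", "-input=false"], cwd=workdir, check=True)
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    if result.returncode == 2:
        print(f"DRIFT detected in {workdir}:\n{result.stdout}")
    elif result.returncode == 1:
        print(f"terraform plan failed in {workdir}:\n{result.stderr}")
    else:
        print(f"{workdir}: infrastructure matches configuration")
    return result.returncode

if __name__ == "__main__":
    sys.exit(check_drift(sys.argv[1] if len(sys.argv) > 1 else "."))
```

Scheduled from a CI job, the non-zero exit code can feed an alert so drift is caught before it causes an incident.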
Posted 7 hours ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Req ID: 327296
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now.
We are currently seeking a GCP Solution Architect to join our team in Noida, Uttar Pradesh (IN-UP), India (IN).
Job Description:
Primary Skill: Cloud-Infrastructure-Google Cloud Platform
Minimum work experience: 8+ years
Total Experience: 8+ Years
Must have GCP Solution Architect Certification & GKE
Mandatory Skills:
Technical Qualification/Knowledge:
Expertise in assessing, designing and implementing GCP solutions covering compute, network, storage, identity, security, DR/business continuity strategy, migration, templates, cost optimization, PowerShell, Terraform, Ansible, etc.
Must have GCP Solution Architect Certification.
Should have prior experience in executing large, complex cloud transformation programs including discovery, assessment, business case creation, design, build, migration planning and migration execution.
Should have prior experience in using industry-leading or native discovery, assessment and migration tools.
Good knowledge of cloud technology, different patterns, deployment methods, and application compatibility.
Good knowledge of GCP technologies and associated components and variations:
Anthos Application Platform
Compute Engine, Compute Engine Managed Instance Groups, Kubernetes
Cloud Storage, Cloud Storage for Firebase, Persistent Disk, Local SSD, Filestore, Transfer Service
Virtual Private Cloud (VPC), Cloud DNS, Cloud Interconnect, Cloud VPN Gateway, Network Load Balancing, Global load balancing, Firewall rules, Cloud Armor
Cloud IAM, Resource Manager, Multi-factor Authentication, Cloud KMS
Cloud Billing, Cloud Console, Stackdriver
Cloud SQL, Cloud Spanner, Cloud Bigtable
Cloud Run container services, Kubernetes Engine (GKE), Anthos Service Mesh, Cloud Functions, PowerShell on GCP
Solid understanding of and experience in cloud computing based services architecture, technical design and implementations including IaaS, PaaS, and SaaS.
Design of clients' cloud environments with a focus mainly on GCP, demonstrating technical cloud architectural knowledge.
Playing a vital role in the design of production, staging, QA and development cloud infrastructures running in 24x7 environments.
Delivery of customer cloud strategies, aligned with customers' business objectives and with a focus on cloud migrations and DR strategies.
Nurture cloud computing expertise internally and externally to drive cloud adoption.
Should have a deep understanding of IaaS and PaaS services offered on cloud platforms and understand how to use them together to build complex solutions.
Ensure that all cloud solutions follow security and compliance controls, including data sovereignty.
Deliver cloud platform architecture documents detailing the vision for how GCP infrastructure and platform services support the overall application architecture, interacting with application, database and testing teams to provide a holistic view to the customer.
Collaborate with application architects and DevOps to modernize infrastructure-as-a-service (IaaS) applications to platform-as-a-service (PaaS).
Create solutions that support a DevOps approach for delivery and operations of services.
Interact with and advise business representatives of the application regarding functional and non-functional requirements.
Create proof-of-concepts to demonstrate viability of solutions under consideration.
Develop enterprise-level conceptual solutions and sponsor consensus/approval for global applications.
Have a working knowledge of other architecture disciplines including application, database, infrastructure, and enterprise architecture.
Identify and implement best practices, tools and standards.
Provide consultative support to the DevOps team for production incidents.
Drive and support system reliability, availability, scale, and performance activities.
Evangelize cloud automation and be a thought leader and expert defining standards for building and maintaining cloud platforms.
Knowledgeable about configuration management tools such as Chef/Puppet/Ansible.
Automation skills using CLI scripting in any language (Bash, Perl, Python, Ruby, etc.).
Ability to develop a robust design to meet customer business requirements with scalability, availability, performance and cost-effectiveness using GCP offerings.
Ability to identify and gather requirements to define an architectural solution which can be successfully built and operated on GCP.
Ability to conclude high-level and low-level design for the GCP platform, which may also include data center design as necessary.
Capability to provide GCP operations and deployment guidance and best practices throughout the lifecycle of a project.
Understanding of the significance of the different monitoring metrics and their threshold values, with the ability to take corrective measures when thresholds are breached.
Knowledge of automation to reduce the number of incidents, or repetitive incidents, is preferred.
Good knowledge of cloud data center operations, monitoring tools, and backup solutions.
GKE:
Set up monitoring and logging to troubleshoot a cluster, or debug a containerized application.
Manage Kubernetes objects: declarative and imperative paradigms for interacting with the Kubernetes API.
Managing Secrets: manage confidential settings data using Secrets (a sketch follows this posting).
Configure load balancing, port forwarding, or set up firewall or DNS configurations to access applications in a cluster.
Configure networking for your cluster.
Hands-on experience with Terraform; ability to write reusable Terraform modules.
Hands-on Python and Unix shell scripting is required.
Understanding of CI/CD pipelines in a globally distributed environment using Git, Artifactory, Jenkins, and a Docker registry.
Experience with GCP services and writing Cloud Functions.
Hands-on experience deploying and managing Kubernetes infrastructure with Terraform Enterprise.
Certified Kubernetes Administrator (CKA) and/or Certified Kubernetes Application Developer (CKAD) is a plus.
Experience using Docker within container orchestration platforms such as GKE.
Knowledge of setting up Splunk.
Knowledge of Spark on GKE.
Certification: GCP Solution Architect & GKE
Process/Quality Knowledge:
Must have clear knowledge of ITIL-based service delivery. ITIL certification is desired.
Knowledge of quality and security processes.
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services.
We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here . If you'd like more information on your EEO rights under the law, please click here . For Pay Transparency information, please click here . Show more Show less
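The Secrets-management sketch referenced in the GKE section above, shown here with the official Kubernetes Python client: namespace, secret name, and values are illustrative only, and in practice the material would come from a vault or KMS rather than being hard-coded.

```python
# Sketch: create (or replace) a Kubernetes Secret with the official Python client.
# Assumes `kubernetes` is installed and kubeconfig points at the target GKE cluster.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

def upsert_secret(namespace: str, name: str, data: dict) -> None:
    config.load_kube_config()                      # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    body = client.V1Secret(
        metadata=client.V1ObjectMeta(name=name),
        string_data=data,                          # plain strings; the API server base64-encodes them
        type="Opaque",
    )
    try:
        v1.create_namespaced_secret(namespace=namespace, body=body)
    except ApiException as exc:
        if exc.status == 409:                      # already exists -> replace
            v1.replace_namespaced_secret(name=name, namespace=namespace, body=body)
        else:
            raise

if __name__ == "__main__":
    upsert_secret("default", "app-db-credentials", {"username": "app", "password": "change-me"})
```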
Posted 7 hours ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Introduction: EkVayu Tech is a fast-growing, research-focused technology company specializing in developing IT and AI applications. Our projects span modern front-end development, robust backend systems, cloud-native and on-prem infrastructure, AI/ML enablement, and automated testing pipelines. We are looking for a visionary technical leader to guide our engineering team and architecture strategy as we scale. We have products in the areas of Cybersecurity, AI/ML/DL, Signal Processing, Systems Engineering, and Health-Tech.
Job Title: Tech Architect / VP of Engineering / Tech Lead
Experience Level: Senior / Leadership
Location: Noida Sector 62, UP, India
Role Overview
As a Tech Architect / Engineering VP / Tech Lead, you will be responsible for driving the overall engineering strategy, leading architecture and design decisions, managing development teams, and ensuring scalable, high-performance delivery of products. You’ll work closely with founders, product teams, and clients to define and deliver cutting-edge solutions that leverage AI and full-stack technologies.
Key Responsibilities
Architectural Leadership:
Design and evolve scalable, secure, and performant architecture across front-end, backend, and AI services.
Guide tech stack choices, frameworks, and tools aligned with business goals.
Lead cloud/on-prem infrastructure decisions, including CI/CD, containerization, and DevOps automation.
Engineering Management:
Build and mentor a high-performing engineering team.
Define engineering best practices, coding standards, and technical workflows.
Own technical delivery timelines and code quality benchmarks.
Hands-on Development & Technical Oversight:
Contribute to critical system components and set examples in code quality and documentation.
Oversee implementation of RESTful APIs, microservices, AI modules, and integration plugins.
Champion test-driven development and automated QA processes.
AI Enablement:
Guide development of AI-enabled features, data pipelines, and model integration (working with MLOps/data teams).
Drive adoption of tools that enhance AI-assisted development and intelligent systems.
Infrastructure & Deployment:
Architect hybrid environments across cloud and on-prem setups.
Optimize deployment pipelines using tools like Docker, Kubernetes, GitHub Actions, or similar.
Implement observability solutions for performance monitoring and issue resolution.
Required Skills & Experience
8+ years of experience in software engineering, with 3+ years in a leadership/architect role.
Strong proficiency in:
Frontend: React.js, Next.js
Backend: Python, Django, FastAPI
AI/ML Integration: Working knowledge of ML model serving, APIs, or pipelines
Experience building and scaling systems in hybrid (cloud/on-prem) environments.
Hands-on with CI/CD, testing automation, and modern DevOps workflows.
Experience with plugin-based architectures and extensible systems.
Deep understanding of security, scalability, and performance optimization.
Ability to translate business needs into tech solutions and communicate across stakeholders.
Preferred (Nice to Have)
Experience with the OpenAI API, LangChain, or custom AI tooling environments.
Familiarity with infrastructure-as-code (Terraform, Ansible).
Background in SaaS product development or AI-enabled platforms.
Knowledge of container orchestration (Kubernetes) and microservice deployments.
What We Offer
Competitive compensation
Opportunity to shape core technology in a fast-growing company
Exposure to cutting-edge AI applications and infrastructure challenges
Collaborative and open-minded team culture
How to Apply
Send your resume, portfolio (if applicable), and a brief note on why you’re excited to join us to HR@EkVayu.com
Show more Show less
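To ground the "RESTful APIs, microservices, AI modules" responsibility in the stack the posting names (Python/FastAPI), here is a minimal, hedged sketch of a model-serving endpoint; the model logic and field names are hypothetical.

```python
# Sketch: a minimal FastAPI microservice wrapping a (hypothetical) ML model.
# Assumes fastapi, pydantic, and uvicorn are installed; predict_score() stands in for real inference.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="scoring-service")

class ScoreRequest(BaseModel):
    feature_a: float
    feature_b: float

class ScoreResponse(BaseModel):
    score: float

def predict_score(a: float, b: float) -> float:
    """Placeholder for loading and calling the real model."""
    return 0.5 * a + 0.5 * b

@app.post("/predict", response_model=ScoreResponse)
def predict(req: ScoreRequest) -> ScoreResponse:
    return ScoreResponse(score=predict_score(req.feature_a, req.feature_b))

@app.get("/healthz")
def healthz() -> dict:
    return {"status": "ok"}

# Run locally with:  uvicorn scoring_service:app --reload
```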
Posted 7 hours ago
0.0 - 1.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
Responsibilities: Develop and maintain infrastructure as code (IaC) to support scalable and secure infrastructure. Collaborate with the development team to streamline and optimize the continuous integration and deployment pipeline. Manage and administer Linux systems, ensuring reliability and security. Configure and provision cloud resources on AWS, Google Cloud, or Azure as required. Implement and maintain containerized environments using Docker and orchestration with Kubernetes. Monitor system performance and troubleshoot issues to ensure optimal application uptime. Stay updated with industry best practices, tools, and DevOps methodologies. Enhance software development processes through automation and continuous improvement initiatives. Requirements: Degree(s): B.Tech/BE (CS, IT, EC, EI) or MCA. Eligibility: Open to 2021, 2022, and 2023 graduates and postgraduates only. Expertise in Infrastructure as Code (IaC) with tools like Terraform and CloudFormation. Proficiency in software development using languages such as Python, Bash, and Go. Experience in Continuous Integration with tools such as Jenkins, Travis CI, and CircleCI. Strong Linux system administration skills. Experience in provisioning, configuring, and managing cloud resources (AWS, Google Cloud Platform, or Azure). Excellent verbal and written communication skills. Experience with containerization and orchestration tools such as Docker and Kubernetes. Job Type: Full-time Pay: ₹45,509.47 - ₹85,958.92 per month Benefits: Health insurance Schedule: Day shift Ability to commute/relocate: Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred) Education: Bachelor's (Preferred) Experience: Python: 1 year (Preferred) AI/ML: 1 year (Preferred) Location: Indore, Madhya Pradesh (Preferred) Work Location: In person
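A hedged example of the "configure and provision cloud resources" duty using boto3 on AWS: creating a bucket with default encryption and public access blocked. The bucket name and region are placeholders.

```python
# Sketch: provision an S3 bucket with default encryption and public access blocked (boto3).
# Assumes AWS credentials are configured; bucket name and region are placeholders.
import boto3

def create_secure_bucket(name: str, region: str = "ap-south-1") -> None:
    s3 = boto3.client("s3", region_name=region)
    s3.create_bucket(
        Bucket=name,
        CreateBucketConfiguration={"LocationConstraint": region},
    )
    s3.put_bucket_encryption(
        Bucket=name,
        ServerSideEncryptionConfiguration={
            "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
        },
    )
    s3.put_public_access_block(
        Bucket=name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )

if __name__ == "__main__":
    create_secure_bucket("example-devops-artifacts-bucket")
```

In an IaC workflow the same resource would usually live in Terraform or CloudFormation; a script like this is the scripting-level equivalent the role also calls for.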
Posted 7 hours ago
8.0 - 12.0 years
0 Lacs
Delhi, India
On-site
Greetings from TCS!!
TCS is hiring for an Azure with Terraform role.
Exp: 8 to 12 years
Mandatory skills: Azure, Compute, Storage, DNS, Terraform
Interview mode: Face to Face
Interview Date: 21 Jun 25 (Saturday)
Interview venue: Yamuna Park, Delhi
Job Description:
• Design and deploy scalable, highly available, and fault-tolerant systems on Azure.
• Proven experience with Microsoft Azure services (Compute, Storage, Networking, Security).
• Strong understanding of networking concepts (DNS, VPN, VNet, NSG, Load Balancers).
• Manage and monitor cloud infrastructure using Azure Monitor, Log Analytics, and other tools.
• Implement and manage virtual networks, storage accounts, and Azure Active Directory.
• Hands-on experience with Infrastructure as Code (IaC) tools like ARM, Terraform.
• Experience with scripting languages (PowerShell, Bash, or Python).
• Ensure security best practices and compliance standards are followed.
• Troubleshoot and resolve issues related to cloud infrastructure and services.
• Experience in DevOps to support CI/CD pipelines and containerized applications (AKS, Docker).
• Optimize cloud costs and performance.
• Familiarity with Azure DevOps, GitHub Actions, or other CI/CD tools.
• Experience in identity and access management (IAM), RBAC, and Azure AD.
Please share your updated CV with the below details if you are interested:
Overall exp:
Relevant exp:
Current Organisation:
Highest qualification:
Current CTC:
Expected CTC:
Notice period:
Current location:
Preferred location:
Gap if any:
Available for F2F discussion on 21 Jun (Saturday) Y/N:
Show more Show less
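As a hedged companion to the scripting requirement (PowerShell, Bash, or Python), a small Python sketch that lists resource groups missing a cost-allocation tag — the kind of governance and cost-optimization check described above. It assumes the azure-identity and azure-mgmt-resource SDKs; the tag key is an assumption.

```python
# Sketch: report Azure resource groups missing a cost-allocation tag.
# Assumes azure-identity + azure-mgmt-resource and Reader access; "cost-center" is a placeholder key.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

REQUIRED_TAG = "cost-center"

def untagged_resource_groups(subscription_id: str) -> list[str]:
    client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)
    missing = []
    for rg in client.resource_groups.list():
        tags = rg.tags or {}
        if REQUIRED_TAG not in tags:
            missing.append(f"{rg.name} ({rg.location})")
    return missing

if __name__ == "__main__":
    for entry in untagged_resource_groups(os.environ["AZURE_SUBSCRIPTION_ID"]):
        print(f"[MISSING TAG] {entry}")
```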
Posted 7 hours ago
6.0 years
0 Lacs
India
Remote
Who we are We're a leading, global security authority that's disrupting our own category. Our encryption is trusted by the major ecommerce brands, the world's largest companies, the major cloud providers, entire country financial systems, entire internets of things and even down to the little things like surgically embedded pacemakers. We help companies put trust - an abstract idea - to work. That's digital trust for the real world. Job summary As a DevOps Engineer, you will play a pivotal role in designing, implementing, and maintaining our infrastructure and deployment processes. You will collaborate closely with our development, operations, and security teams to ensure seamless integration of code releases, infrastructure automation, and continuous improvement of our DevOps practices. This role places a strong emphasis on infrastructure as code with Terraform, including module design, remote state management, policy enforcement, and CI/CD integration. You will manage authentication via Auth0, maintain secure network and identity configurations using AWS IAM and Security Groups, and oversee the lifecycle and upgrade management of AWS RDS and MSK clusters. Additional responsibilities include managing vulnerability remediation, containerized deployments via Docker, and orchestrating production workloads using AWS ECS and Fargate. What you will do Design, build, and maintain scalable, reliable, and secure infrastructure solutions on cloud platforms such as AWS, Azure, or GCP. Implement and manage continuous integration and continuous deployment (CI/CD) pipelines for efficient and automated software delivery. Develop and maintain infrastructure as code (IaC) — with a primary focus on Terraform — including building reusable, modular, and parameterized modules for scalable infrastructure. Securely manage Terraform state using remote backends (e.g., S3 with DynamoDB locks) and establish best practices for drift detection and resolution. Integrate Terraform into CI/CD pipelines with automated plan, apply, and policy-check gating Conduct testing and validation of Terraform code using tools such as Terratest, Checkov, or equivalent frameworks. Design and manage network infrastructure, including VPCs, subnets, routing, NAT gateways, and load balancers. Configure and manage AWS IAM roles, policies, and Security Groups to enforce least-privilege access control and secure application environments. Administer and maintain Auth0 for user authentication and authorization, including rule scripting, tenant settings, and integration with identity providers. Build and manage containerized applications using Docker, deployed through AWS ECS and Fargate for scalable and cost-effective orchestration. Implement vulnerability management workflows, including image scanning, patching, dependency management, and CI-integrated security controls. Manage RDS and MSK infrastructure, including lifecycle and version upgrades, high availability setup, and performance tuning. Monitor system health, performance, and capacity using tools like Prometheus, ELK, or Splunk; proactively resolve bottlenecks and incidents. Collaborate with development and security teams to resolve infrastructure issues, streamline delivery, and uphold compliance. What you will have Bachelor's degree in Computer Science, Engineering, or related field, or equivalent work experience. 6+ years in DevOps or similar role, with strong experience in infrastructure architecture and automation. 
Advanced proficiency in Terraform, including module creation, backend management, workspaces, and integration with version control and CI/CD. Experience with remote state management using S3 and DynamoDB, and implementing Terraform policy-as-code with OPA/Sentinel. Familiarity with Terraform testing/validation tools such as Terratest, InSpec, or Checkov. Strong background in cloud networking, VPC design, DNS, and ingress/egress control. Proficient with AWS IAM, Security Groups, EC2, RDS, S3, Lambda, MSK, and ECS/Fargate. Hands-on experience with Auth0 or equivalent identity management platforms. Proficient in container technologies like Docker, with production deployments via ECS/Fargate. Solid experience in vulnerability and compliance management across the infrastructure lifecycle. Skilled in scripting (Python, Bash, PowerShell) for automation and tooling development. Experience in monitoring/logging using Prometheus, ELK stack, Grafana, or Splunk. Excellent troubleshooting skills in cloud-native and distributed systems. Effective communicator and cross-functional collaborator in Agile/Scrum environments. Nice to have Terraform (Intermediate) • AWS (IAM, Security Groups, RDS, MSK, ECS/Fargate, Cloudwatch) • Docker • CI/CD (GitLab, Jenkins) • Auth0 • Python/Bash Benefits Generous time off policies Top shelf benefits Education, wellness and lifestyle support Show more Show less
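Tied to the remote-state bullet points above, here is a hedged sketch that verifies the S3 backend bucket has versioning enabled and that the DynamoDB lock table exists with the key Terraform expects. Bucket and table names are placeholders, not this team's real backend.

```python
# Sketch: sanity-check a Terraform S3/DynamoDB remote-state backend with boto3.
# Bucket and table names are placeholders; assumes AWS credentials with read access.
import boto3
from botocore.exceptions import ClientError

def check_state_backend(bucket: str, lock_table: str) -> bool:
    s3 = boto3.client("s3")
    ddb = boto3.client("dynamodb")
    ok = True

    versioning = s3.get_bucket_versioning(Bucket=bucket)
    if versioning.get("Status") != "Enabled":
        print(f"[WARN] state bucket {bucket} does not have versioning enabled")
        ok = False

    try:
        table = ddb.describe_table(TableName=lock_table)["Table"]
        keys = {k["AttributeName"] for k in table["KeySchema"]}
        if "LockID" not in keys:                 # Terraform expects a LockID hash key
            print(f"[WARN] lock table {lock_table} has no LockID key")
            ok = False
    except ClientError:
        print(f"[WARN] lock table {lock_table} not found")
        ok = False

    return ok

if __name__ == "__main__":
    check_state_backend("example-terraform-state", "example-terraform-locks")
```

Run from a CI job, a failing check blocks pipelines before `terraform apply` can operate against an unsafe backend.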
Posted 8 hours ago
40.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Analyze, design, develop, troubleshoot and debug software programs for commercial or end-user applications. Write code, complete programming, and perform testing and debugging of applications.
Career Level - IC3
Responsibilities
As a member of the software engineering division, you will perform high-level design based on provided external specifications. Specify, design and implement minor changes to existing software architecture. Build highly complex enhancements and resolve complex bugs. Build and execute unit tests and unit plans. Review integration and regression test plans created by QA. Communicate with QA and porting engineering as necessary to discuss minor changes to product functionality and to ensure quality and consistency across specific products.
Responsibilities
Work with the team to develop and maintain full-stack SaaS solutions.
Collaborate with engineering and product teams, contribute to the definition of specifications for new features, and own the development of those features.
Define and implement web services and the application backend microservices.
Implement and/or assist with the web UI/UX development.
Be a champion for cloud-native best practices.
Have a proactive mindset about bug fixes, solving bottlenecks, and addressing performance issues.
Maintain code quality, organization, and automation.
Ensure the testing strategy is followed within the team.
Support the services you build in production.
Essential Skills And Background
Expert knowledge of Java
Experience with microservice development at scale
Experience working with Kafka
Experience with automated test frameworks at the unit, integration and acceptance levels
Use of source code management systems such as Git
Preferred Skills And Background
Knowledge of issues related to scalable, fault-tolerant architectures
Knowledge of Python
Experience with SQL and RDBMS (Oracle and/or MySQL preferred)
Experience deploying applications in Kubernetes with Helm
Experience with DevOps tools such as Prometheus and Grafana
Experience in Agile development methodology
Experience with Terraform is preferred
Use of build tools like Gradle and Maven
Qualifications
Career Level - IC3
About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.
Oracle is an Equal Employment Opportunity Employer.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law. Show more Show less
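The role above is Java-centric, but since Python is listed among the preferred skills, here is a hedged Python sketch of the Kafka produce/consume pattern the posting describes, using confluent-kafka; broker address, topic, and group id are placeholders.

```python
# Sketch: produce and consume a message with confluent-kafka (placeholders throughout).
from confluent_kafka import Consumer, Producer

BROKER = "localhost:9092"
TOPIC = "orders"

def produce_one() -> None:
    producer = Producer({"bootstrap.servers": BROKER})
    producer.produce(TOPIC, key="order-1", value='{"status": "created"}')
    producer.flush()                              # block until delivery

def consume_some(max_messages: int = 10) -> None:
    consumer = Consumer({
        "bootstrap.servers": BROKER,
        "group.id": "example-service",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe([TOPIC])
    received = 0
    while received < max_messages:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        print(f"{msg.key()}: {msg.value()}")
        received += 1
    consumer.close()

if __name__ == "__main__":
    produce_one()
    consume_some(1)
```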
Posted 8 hours ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Location: Bangalore, Chennai, Delhi, Pune, Kolkata
Primary Roles And Responsibilities
Develop modern data warehouse solutions using Databricks and the AWS/Azure stack.
Provide forward-thinking solutions in the data engineering and analytics space.
Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
Triage issues to find gaps in existing pipelines and fix them.
Work with the business to understand reporting-layer needs and develop data models to fulfill them.
Help junior team members resolve issues and technical challenges.
Drive technical discussions with client architects and team members.
Orchestrate data pipelines via the Airflow scheduler.
Skills And Qualifications
Bachelor's and/or master's degree in computer science or equivalent experience.
Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects.
Deep understanding of Star and Snowflake dimensional modelling.
Strong knowledge of data management principles.
Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
Should have hands-on experience in SQL, Python and Spark (PySpark).
Candidate must have experience in the AWS/Azure stack.
Experience with both batch and streaming ETL (e.g., Kinesis) is desirable.
Experience in building ETL / data warehouse transformation processes.
Experience with Apache Kafka for use with streaming data / event-based data.
Experience with other open-source big data products, e.g., Hadoop (incl. Hive, Pig, Impala).
Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j).
Experience working with structured and unstructured data, including imaging & geospatial data.
Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git.
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting.
Databricks Certified Data Engineer Associate/Professional certification (desirable).
Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
Should have experience working in Agile methodology.
Strong verbal and written communication skills.
Strong analytical and problem-solving skills with a high attention to detail.
Mandatory Skills: Python / PySpark / Spark with Azure/AWS Databricks
Skills: neo4j,pig,mongodb,pl/sql,architect,terraform,hadoop,pyspark,impala,apache kafka,adfs,etl,data warehouse,spark,azure,data bricks,databricks,rdbms,cassandra,aws,unix shell scripting,circleci,python,azure synapse,hive,git,kinesis,sql
Show more Show less
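To make the PySpark/Delta Lake expectation concrete, a hedged sketch of a small batch transformation of the kind run on Databricks; the source path, column names, and target table are invented for illustration.

```python
# Sketch: a small PySpark batch job writing a curated Delta table (Databricks-style).
# Source path, column names, and target table are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-curation").getOrCreate()

raw = spark.read.json("/mnt/raw/orders/")          # landing-zone JSON files

curated = (
    raw.filter(F.col("status").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
       .groupBy("order_date", "country")
       .agg(F.sum("amount").alias("total_amount"),
            F.count("*").alias("order_count"))
)

(curated.write
        .format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .saveAsTable("analytics.daily_orders"))
```

In a production pipeline a job like this would typically be parameterized and scheduled from Airflow, matching the orchestration responsibility above.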
Posted 8 hours ago