Jobs
Interviews

43887 GCP Jobs - Page 35

Set up a job alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 years

0 Lacs

indore, madhya pradesh, india

On-site

About Us
Ignatiuz is a digital transformation and intelligent workplace consulting company with offices in the US (PA) and India (Indore). We focus on accelerating digital performance through innovation and automation. Our team has worked with a variety of Fortune 500 clients and has a track record of delivering reliable, high-quality solutions that drive business success.

Job Description
- Lead the design and development of robust machine learning and deep learning architectures tailored to solve real-world business challenges.
- Build and optimize models using frameworks like TensorFlow, PyTorch, and Keras, ensuring high accuracy and performance.
- Architect and oversee end-to-end data pipelines, including data ingestion, preprocessing, and feature engineering for AI workloads.
- Research, prototype, and productionize novel algorithms in areas such as computer vision, NLP, and reinforcement learning.
- Deploy models using cloud platforms (AWS/Azure/GCP), leveraging GPU/TPU infrastructure and integrating into CI/CD workflows using Docker and other tools.
- Tune hyperparameters, manage model drift, and implement monitoring to ensure sustained model performance in production.
- Collaborate with engineering, data science, and product teams to align AI initiatives with business goals.
- Ensure robust documentation, version control, and adherence to MLOps principles throughout the development lifecycle.
- Stay ahead of trends in AI/ML and guide the team on best practices, tools, and frameworks.

Required Skills and Qualifications
- Education: Bachelor’s or Master’s degree in Computer Science, AI, Data Science, or a related technical field.
- Experience: 4–5 years of experience developing and deploying machine learning or deep learning solutions in real-world environments.
- Technical expertise: proficiency in Python and ML libraries (NumPy, Pandas, Scikit-learn, Matplotlib, etc.); deep experience with TensorFlow, PyTorch, and Keras.
- Strong grasp of neural networks, CNNs, RNNs, Transformers, and GANs.
- Experience with GPU/TPU acceleration, model optimization, and distributed training.
- Familiarity with cloud platforms (AWS, Azure, or GCP) for scalable AI deployments.
- Hands-on with Docker, version control, and CI/CD tools for deployment automation.
- Experience in NLP, computer vision, or reinforcement learning projects is a strong plus.

Preferred Traits
- Strong problem-solving and architectural thinking
- Ability to work in agile, cross-functional teams
- Passion for innovation and applying AI to business use cases
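The model-drift monitoring the posting mentions is often implemented with a distribution-shift statistic such as the Population Stability Index (PSI). A minimal pure-Python sketch, assuming a simple equal-width bucketing and the common rule-of-thumb thresholds (below 0.1 stable, above 0.25 significant drift) — the bucketing scheme and cutoffs are illustrative assumptions, not a specific employer's method:

```python
import math

def psi(expected, actual, buckets=10):
    """Population Stability Index between a training baseline and live data.

    Rule of thumb: < 0.1 'no drift', 0.1-0.25 'moderate', > 0.25 'significant'.
    """
    lo, hi = min(expected), max(expected)
    step = (hi - lo) / buckets or 1.0

    def frac(sample, i):
        # fraction of sample falling in bucket i, floored to avoid log(0)
        a, b = lo + i * step, lo + (i + 1) * step
        n = sum(1 for x in sample if a <= x < b or (i == buckets - 1 and x == hi))
        return max(n / len(sample), 1e-6)

    return sum((frac(actual, i) - frac(expected, i))
               * math.log(frac(actual, i) / frac(expected, i))
               for i in range(buckets))

# identical distributions give PSI near zero; a shifted feature scores high
baseline = [i / 100 for i in range(100)]
shifted = [x + 0.5 for x in baseline]
```

In production this would run on a schedule against each monitored feature, alerting when the score crosses the chosen threshold.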

Posted 2 days ago

Apply

8.0 years

0 Lacs

trivandrum, kerala, india

On-site

🚀 We’re Hiring: Database Administrator (DBA)
📍 Location: Thiruvananthapuram, Kerala (Onsite)
💼 Experience: 4–8 Years
💰 CTC Range: ₹8–16 LPA

About the Role
We are seeking a skilled Database Administrator to manage, optimize, and secure AI-driven, cloud-hosted relational and NoSQL databases. The ideal candidate will have expertise in database architecture, performance tuning, security, high availability, and cloud-based solutions, working closely with Development, DevOps, and Infrastructure teams to ensure efficient data management.

Key Responsibilities
- Design, implement, and maintain scalable, high-availability databases.
- Optimize SQL queries, schemas, and indexing strategies for performance.
- Monitor, troubleshoot, and tune databases for efficiency.
- Implement replication, partitioning, and sharding techniques.
- Manage cloud databases (AWS RDS, Azure SQL, GCP Spanner).
- Enforce database security policies, roles, and access controls.
- Set up automated backups and recovery plans, and ensure compliance (GDPR, HIPAA, ISO).
- Support CI/CD pipelines for automated database deployments.
- Perform installation, upgrades, and patching of database systems.

Required Skills
- 4+ years of DBA experience.
- Strong knowledge of MySQL, PostgreSQL, SQL Server, Oracle, and MongoDB.
- Expertise in query optimization, indexing, clustering, and replication.
- Familiarity with AI-powered analytics and predictive data processing.
- Experience with data pipelines and streaming tech (Kafka, Snowflake, BigQuery).
- Proficiency in scripting and automation (Shell, Python, PowerShell).
- Strong awareness of database security and compliance standards.

Preferred Qualifications
- Bachelor’s in Computer Science/IT (or equivalent).
- Relevant certifications: Oracle DBA, AWS Database Specialty, MS SQL Server.
- Exposure to NoSQL systems (Redis, Cassandra, DynamoDB).
- Familiarity with Big Data tools (Hadoop, Spark, Elasticsearch).
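The indexing-strategy work described in this posting can be sketched with Python's stdlib sqlite3 standing in for the production engines named above (table and column names are invented): `EXPLAIN QUERY PLAN` reveals whether a query scans the whole table or resolves via an index.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

def plan(sql):
    # the 'detail' column of the first plan row describes the access path
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[3]

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)  # reports a full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)   # now reports a search using the index
```

The same before/after inspection works in the listed engines via their own `EXPLAIN` variants; the optimizer's choice, not the index's existence, is what a DBA verifies.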

Posted 2 days ago

Apply

5.0 years

0 Lacs

surat, gujarat, india

On-site

📍 Surat, India | 🕑 2–5 Years Experience

About Us
Membroz, a flagship SaaS product by Krtya Technologies Pvt Ltd, helps clubs, resorts, and fitness businesses manage memberships, bookings, and customer engagement. We are expanding our development team to build next-gen features and integrations.

Responsibilities
- Design and develop scalable SaaS applications using Next.js, NestJS, React.js, and MongoDB
- Build and maintain APIs and reusable components
- Collaborate with product managers, designers, and QA to deliver new features
- Ensure application performance, security, and scalability
- Troubleshoot and fix issues across the stack

Requirements
- 2–5 years of hands-on experience with JavaScript/TypeScript
- Strong proficiency in Next.js, NestJS, React.js, and MongoDB
- Experience building RESTful APIs and integrating third-party services
- Familiarity with Git, CI/CD workflows, and cloud environments (AWS/GCP/Azure is a plus)
- Strong problem-solving mindset and ability to work in a collaborative team

What We Offer
- Competitive salary and growth opportunities
- Opportunity to work on a fast-growing SaaS platform (Membroz) with global clients
- Supportive and innovative work culture

Posted 2 days ago

Apply

8.0 - 12.0 years

0 Lacs

mumbai metropolitan region

On-site

Role: L3 Support Engineer & Network Support Engineer
Experience: 8–12 years
Notice period: Immediate to 30 days
Location: Mumbai
Email: vaishnavi.yelgulwar@aptita.com

Roles and Responsibilities
We are looking for an experienced L3 IT Support and Network Support Engineer to join our IT team. The ideal candidate will be responsible for providing high-level support for complex hardware, software, and network issues while playing a critical role in infrastructure management, security enforcement, and project implementation.

L3 IT Support
- Handle escalations from L1 and L2 teams related to systems and networks.
- Provide advanced troubleshooting and root cause analysis for OS, application, and hardware-related issues.
- Maintain and support enterprise tools such as Active Directory, Microsoft Exchange, Office 365, SCCM, and endpoint protection systems.
- Manage IT asset inventory, software licensing, patch management, and vulnerability remediation.
- Implement and maintain backup, disaster recovery, and high-availability systems.
- Document all resolutions, procedures, and configurations in the knowledge base.

Network Support
- Configure, monitor, and maintain routers, switches, firewalls, access points, and VPNs.
- Troubleshoot LAN/WAN connectivity, DNS/DHCP issues, and network performance problems.
- Monitor and ensure uptime, availability, and performance of network infrastructure.
- Support and manage firewalls (Cisco ASA/Fortinet/SonicWall/Palo Alto), VPNs, VLANs, and IPsec tunnels.
- Implement network security protocols and support audit/compliance processes.
- Collaborate with vendors for network upgrades, maintenance, and incident resolution.

Desired Skillsets
- 8–12 years of hands-on experience in IT infrastructure support and network operations.
- Strong knowledge of Windows Server, Active Directory, Group Policy, and DNS/DHCP.
- Experience with cloud services (Azure, AWS, or GCP) and virtualization (VMware/Hyper-V).
- Solid understanding of networking concepts: TCP/IP, BGP, OSPF, NAT, VLANs, etc.
- Hands-on experience with enterprise-grade routers/switches/firewalls.
- Microsoft Certified: Azure Administrator Associate / MCSA / MCSE
- Cisco Certified Network Professional (CCNP) / CCNA
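Much of the VLAN and DHCP troubleshooting this role describes reduces to subnet arithmetic, which Python's stdlib ipaddress module handles directly. The VLAN numbering and address ranges below are a made-up example plan, not any real network:

```python
import ipaddress

# hypothetical VLAN-to-subnet plan for an office network
vlans = {
    10: ipaddress.ip_network("10.0.10.0/24"),  # user workstations
    20: ipaddress.ip_network("10.0.20.0/24"),  # servers
    30: ipaddress.ip_network("10.0.30.0/28"),  # management
}

def vlan_for(host):
    """Return the VLAN id whose subnet contains the host, else None."""
    addr = ipaddress.ip_address(host)
    return next((vid for vid, net in vlans.items() if addr in net), None)

# a host answering from the wrong subnet is a classic misconfigured-trunk clue
print(vlan_for("10.0.20.17"))       # VLAN 20
print(vlans[30].num_addresses - 2)  # usable hosts in the /28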

Posted 2 days ago

Apply

6.0 years

0 Lacs

chennai, tamil nadu, india

On-site

Greetings from Getronics! We have multiple opportunities for Full Stack Java Developers for our automotive client in Chennai.

About the Company
Getronics is the leading member of the Global Workspace Alliance, a unique model that provides customers with a consistent IT service throughout the world, with one single point of contact and billing entity, delivering services to 100 countries.

About the Role
Java Full Stack Developer. Serve as a core member of the secure coding product team that enables the design, development, and creation of secure coding practices.

Experience Required
6+ years of overall experience, with 4+ years as a core Java full stack developer.

Skills Required
- Software Engineer - Developer.
- Java, Spring Boot, Microservices, and Angular or React for front-end development.
- Web services, Microsoft SQL Server, NoSQL databases, and flat file processing concepts.
- REST APIs and related architectures.
- Experience with any cloud is required (GCP (preferred)/AWS/Azure/PCF).

Responsibilities
- Develop application software and RESTful services using the Spring Framework.
- Build distributed, service-oriented, cloud microservice-based architectures.
- Use Test-Driven Development and code pairing/mobbing practices.
- Develop components across all tiers of the application stack.
- Continuously integrate and deploy developed software.
- Consult with the product manager to identify the minimal viable product and decompose features by story slicing.
- Collaborate with other product teams on integrations, testing, and deployments.

Qualifications
Any Bachelor's degree (preferably an Engineering graduate).

Additional Information
- Willing to work in hybrid mode (4 days per week) at the client location in Chennai.
- Looking for candidates with immediate to 45 days' notice only.
- Candidates should be willing to attend a HackerRank coding assessment (1-hour online video coding) as the first level of interview.

Interested candidates, please share your resume to abirami.rsk@getronics.com.

Posted 2 days ago

Apply

4.0 years

0 Lacs

mumbai metropolitan region

On-site

Sia is a next-generation, global management consulting group. Founded in 1999, we were born digital. Today our strategy and management capabilities are augmented by data science, enhanced by creativity and driven by responsibility. We’re optimists for change and we help clients initiate, navigate and benefit from transformation. We believe optimism is a force multiplier, helping clients to mitigate downside and maximize opportunity. With expertise across a broad range of sectors and services, our consultants serve clients worldwide. Our expertise delivers results. Our optimism transforms outcomes.

Heka.ai is the independent brand of Sia Partners dedicated to AI solutions. We host many AI-powered SaaS solutions that can be combined with consulting services or used independently, to provide our customers with solutions at scale.

Job Description
Sia is looking for a DevOps Engineer to support the development of the Data Science Business Line in our new office in Mumbai. Your main responsibilities will include:
- Participation in our consulting assignments with our clients
- Development of our Software-as-a-Service products from Heka.ai
- Support to Data Scientists, Data Engineers and Software Engineers in projects with a strong data component:
  - Cloud services: architecture, storage and computing services, cost monitoring and optimization, access level management
  - Containers: containerization and orchestration of applications, Kubernetes cluster management adapted to data workloads
  - Python programming: development of tools executed on the server (automation scripts, microservices, etc.)
  - CI/CD: integration and continuous deployment of applications
- Contribution to technological, architectural and governance choices to address the challenges of scaling data projects

Qualifications
- Engineering background with 4+ years of relevant experience in DevOps engineering
- Mastery of Python for back-end projects; another programming language is a plus
- Mastery of a containerization solution such as Docker, and at least one container orchestration tool
- Experience with the services of a cloud provider (AWS, GCP or Azure)
- Knowledge of Infrastructure-as-Code tools such as Terraform is a plus
- Mastery of CI/CD techniques and tools (pipeline automation, Docker, Kubernetes...)
- Fluency in English (written and oral)

Additional Information
Sia is an equal opportunity employer. All aspects of employment, including hiring, promotion, remuneration, or discipline, are based solely on performance, competence, conduct, or business needs.

Posted 2 days ago

Apply

40.0 years

0 Lacs

greater kolkata area

Remote

Who We Are
Escalent is an award-winning data analytics and advisory firm that helps clients understand human and market behaviors to navigate disruption. As catalysts of progress for more than 40 years, our strategies guide the world’s leading brands. We accelerate growth by creating a seamless flow between primary, secondary, syndicated, and internal business data, providing consulting and advisory services from insights through implementation. Based on a profound understanding of what drives human beings and markets, we identify actions that build brands, enhance customer experiences, inspire product innovation and boost business productivity. We listen, learn, question, discover, innovate, and deliver—for each other and our clients—to make the world work better for people.

Why Escalent?
Once you join our team you will have the opportunity to:
- Access experts across industries for maximum learning opportunities, including weekly knowledge sharing sessions, LinkedIn Learning, and more.
- Gain exposure to a rich variety of research techniques from knowledgeable professionals.
- Enjoy a remote-first/hybrid work environment with a flexible schedule.
- Obtain insights into the needs and challenges of your clients—to learn how the world’s leading brands use research.
- Experience peace of mind working for a company with a commitment to conducting research ethically.
- Build lasting relationships with fun colleagues in a culture that values each person.

Role Overview
We are looking for a Python Application Developer to join our dynamic team in India and contribute to the design, development, and deployment of high-performance applications. The ideal candidate will have a strong background in Python development; experience with API integration, data processing, and scalable application architectures; and a passion for solving complex problems with innovative solutions.

Roles & Responsibilities
- Develop, test, and maintain scalable Python applications in a cloud-native environment
- Design and implement RESTful APIs and microservices to integrate with external systems
- Optimize code for performance, scalability, and maintainability
- Collaborate with cross-functional teams to define, design, and ship new features
- Work with databases (SQL and NoSQL) to manage and process structured and unstructured data
- Implement best practices for security, testing, and CI/CD automation
- Debug and resolve application issues, ensuring high system availability and reliability
- Write clean, structured code as defined in the team’s coding standards, and create documentation for best practices
- Stay updated with emerging technologies, frameworks, and industry trends

Required Skills
- Minimum 7 years of experience with Python and frameworks like Django, Flask, or FastAPI
- Experience with API development, integration, and web services
- Strong knowledge of database systems (PostgreSQL, MySQL, MongoDB, or similar)
- Familiarity with cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes)
- Understanding of asynchronous programming, multithreading, and performance optimization
- Experience with unit testing, debugging, and version control (Git)
- Knowledge of data processing, machine learning, or automation frameworks is a plus
- Ability to work in a team setting
- Organizational and time management skills

Desirable Skills
- Experience working with Agile development methodologies
- Familiarity with DevOps tools (CI/CD pipelines, Terraform, Kubernetes)
- Effective written and verbal communication skills
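The asynchronous-programming skill listed above can be illustrated with a small stdlib asyncio sketch: concurrent I/O-bound tasks finish in roughly the time of the slowest one, not the sum. The task names and delays are invented for illustration:

```python
import asyncio
import time

async def fetch(name, delay):
    # stand-in for an I/O-bound call such as an external API request
    await asyncio.sleep(delay)
    return name

async def main():
    start = time.monotonic()
    # gather schedules all three coroutines concurrently on one event loop
    results = await asyncio.gather(
        fetch("users", 0.1), fetch("orders", 0.1), fetch("stock", 0.1)
    )
    elapsed = time.monotonic() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
# three 0.1 s waits overlap, so total wall time stays well under 0.3 s
```

The same pattern underlies async views in FastAPI, where each awaited database or HTTP call frees the event loop for other requests.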

Posted 2 days ago

Apply

10.0 years

0 Lacs

gurugram, haryana, india

On-site

dunnhumby helps retailers and brands deliver better experiences through Customer First strategies. We are seeking a talented Engineering Manager to lead a team of engineers developing products that help retailers transform their retail media business in a way that achieves maximum ad revenue and enables massive scale. Our mission: to enable businesses to grow and reimagine themselves by becoming advocates and champions for their Customers.

With deep heritage and expertise in retail – one of the world’s most competitive markets, with a deluge of multi-dimensional data – dunnhumby today enables businesses all over the world, across industries, to be Customer First. dunnhumby employs nearly 2,500 experts in offices throughout Europe, Asia, Africa, and the Americas working for transformative, iconic brands such as Tesco, Coca-Cola, Meijer, Procter & Gamble and Metro.

As an Engineering Manager, you will play a pivotal role in designing and delivering high-quality software solutions. You will be responsible for leading a team, mentoring engineers, contributing to system architecture, and ensuring adherence to engineering best practices. Your technical expertise, leadership skills, and ability to drive results will be key to the success of our products.

What You Will Be Doing
- Lead and manage a team of software engineers, fostering growth and development.
- Collaborate with product managers and architects to define technical roadmaps.
- Oversee the design, development, and delivery of .NET Core/Java-based solutions using a microservices-based architecture.
- Implement and enforce best practices in coding, CI/CD pipelines, and DevOps.
- Conduct regular performance reviews and provide actionable feedback.
- Maintain a balance between delivery timelines and technical excellence.
- Ensure adherence to compliance, security, and quality standards.
- Drive innovation and continuous improvement initiatives.
- Improve developer productivity and enhance engineering processes.
- Coach, train, and encourage junior teammates.

Required Skillsets
- 10+ years of experience.
- Strong technical expertise in C#/.NET Core or Java and related frameworks. Candidates with Java experience who are open to working on .NET will be considered.
- Deep understanding of design patterns, SOLID principles, system design, and engineering best practices.
- Deep knowledge of relational databases (SQL, PostgreSQL). Knowledge of BigQuery, Redis, and Elasticsearch is a plus.
- Familiarity with microservices architecture, event-driven architecture (RabbitMQ, Google Pub/Sub), and containerization (Docker, Kubernetes).
- Experience with DevOps tools (GitLab CI/CD) and practices.
- Proven experience with cloud platforms such as GCP (preferred) or Azure.
- Deep understanding of distributed systems, multi-tenant services, cloud-native applications, and Unix/Linux environments.
- Knowledge of the testing pyramid approach, with relevant tools at each level.
- Knowledge of code quality and secure code scan tools such as SonarQube, Checkmarx, ESLint, ReSharper, and Trivy.
- Solid understanding of the software development lifecycle (SDLC) and Agile methodologies.
- Strong interpersonal and leadership skills with the ability to motivate a team.
- Experience with a product background or product-based projects.
- Hands-on experience with observability tools like New Relic.
- Knowledge of the AdTech or retail domain is a plus; experience with automation tools is an added advantage.
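The event-driven architecture this posting names can be sketched with a toy in-memory topic broker — real deployments would use RabbitMQ or Google Pub/Sub, and the topic and event names below are invented:

```python
from collections import defaultdict

class Broker:
    """Toy topic-based broker: fan each published event out to all subscribers."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, event):
        # every subscriber to the topic receives the event independently
        for handler in self._subs[topic]:
            handler(event)

broker = Broker()
audit_log = []
# two independent consumers of the same event, as in an ad-serving pipeline
broker.subscribe("ad.clicked", audit_log.append)
broker.subscribe("ad.clicked", lambda e: audit_log.append(("billed", e["campaign"])))
broker.publish("ad.clicked", {"campaign": "summer-sale"})
```

The decoupling is the point: the publisher never knows how many consumers exist, so billing, auditing, or analytics can be added without touching the producing service.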

Posted 2 days ago

Apply

4.0 - 7.0 years

0 Lacs

noida, uttar pradesh, india

On-site

Job Title: Developer – Java/Microservices
Location: Noida, India
Experience: 4 to 7 years
Employment Type: Full-time
Education: BE/BTech/MCA/BCA/BSc

CentumT&S India, a business unit of the Centum Electronics Group, offers a wide range of electronic and embedded systems design engineering services to global customers to help them realize complex products and subsystems. CentumT&S India (CAI) is an Electronics Design Center of Excellence, designing for mission-critical projects in Aerospace/Space, Transportation, Medical Electronics, Defense Electronics, etc. It has other design centers in France, the USA, Canada and Germany.

About the Role
Centum is looking for a talented Developer to join our Electronic SIM project in the aerospace domain. You will work on building and optimizing enterprise-grade solutions using modern technologies and cloud-native architectures.

Key Responsibilities
- Develop and maintain robust applications using Core Java and microservices.
- Collaborate with architects and leads to design scalable solutions.
- Implement and optimize services in Docker/Kubernetes environments.
- Work with cross-functional teams to deliver high-quality software.
- Write clean, maintainable, and testable code.

Must-Have Skills
- Strong hands-on experience in Core Java and microservices.
- Experience with GCP/AWS is preferred.
- Proficiency in Docker, Kubernetes, and NoSQL databases.
- Understanding of distributed systems and modern software practices.

Good-to-Have Skills
- Exposure to GCP/AWS.
- Knowledge of CI/CD pipelines.

Why Join Us?
- Opportunity to work on innovative projects in the aerospace domain.
- Collaborative culture with a focus on growth and learning.
- Work with cutting-edge technologies in cloud-native environments.

Send your CV directly to ranjithn@centumtns.com

Posted 2 days ago

Apply

0 years

0 Lacs

hyderabad, telangana, india

On-site

At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By “Connecting Convenience” across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications, such as GasBuddy. We’re a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth.

Role Overview
PDI is seeking a talented and motivated full-time Data Engineer III to join our elite agile data services team responsible for developing and maintaining our industry-leading cloud-based big data and data analytics infrastructure serving major global Fortune 500 companies. The ideal candidate will have hands-on experience in coding data pipelines, administering databases, and working with business users to understand and meet their data requirements. This role involves maintaining high performance and security of our data systems, performing quality assurance, and supporting the company’s data infrastructure, primarily using AWS, Snowflake and DBT.

Key Responsibilities
- Design and manage complex data architectures
- Lead the development and optimization of data pipelines and ETL processes
- Mentor junior engineers and provide technical guidance
- Collaborate with cross-functional teams to understand and meet data requirements
- Ensure the reliability and performance of data systems
- Conduct data validation and quality assurance
- Document data workflows and technical specifications
- Participate in agile development processes
- Implement industry standards and best practices
- Maintain data security and compliance
- Provide on-call support as required
- Estimate and plan data engineering projects
- Develop strategies for data storage, processing, and archiving
- Troubleshoot and resolve complex data issues

Qualifications
- Advanced SQL skills and proficiency in multiple programming languages
- Extensive experience with data warehousing, specifically Snowflake
- Proficiency in DBT (Data Build Tool)
- Extensive experience in cloud platforms such as AWS, GCP or Azure
- Strong problem-solving and project management skills
- Excellent communication and leadership abilities
- Bachelor’s or master’s degree in Computer Science, Information Technology, or a related field

Preferred Qualifications
Certifications such as Snowflake SnowPro Core Certification, dbt Certification, and AWS Certified Data Analytics are a plus.

Behavioral Competencies
Cultivates Innovation, Decision Quality, Manages Complexity, Drives Results, Business Insight

PDI is committed to offering a well-rounded benefits program, designed to support and care for you and your family throughout your life and career. This includes a competitive salary, market-competitive benefits, and a quarterly perks program. We encourage a good work-life balance with ample time away and, where appropriate, hybrid working arrangements. Employees have access to continuous learning, professional certifications, and leadership development opportunities. Our global culture fosters diversity, inclusion, and values authenticity, trust, curiosity, and diversity of thought, ensuring a supportive environment for all.

Posted 2 days ago

Apply

7.0 - 10.0 years

0 Lacs

hyderabad, telangana, india

On-site

At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By “Connecting Convenience” across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications, such as GasBuddy. We’re a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth.

Role Overview
Do you love creating solutions that unlock developer productivity and bring teams together? Do you insist on the highest standards for the software your team develops? Are you an advocate of fast release cycle times, continuous delivery and measurable quality? If this is you, then join an energetic team of DevOps Engineers building next-generation development applications for PDI!

As a DevOps Engineer, you will partner with a team of senior engineers in the design, development and maintenance of our CI/CD DevOps platform for new and existing PDI solutions. The platform will be used internally by the engineering teams, providing them an internal pipeline to work with POCs, alphas, betas and release candidate environments, as well as supporting the pipeline into our production stage and release environments managed by our CloudOps Engineers and running hybrid clouds composed of PDI datacenter-based private cloud clusters federated with public cloud-based clusters. You will play a key role in designing and building our CI/CD delivery pipeline as we drive to continuously increase our cloud maturity. You will be supporting automated deployment mechanisms, writing hybrid cloud infrastructure as code, automated testing, source control integration and lab environment management.

You will review, recommend and implement system enhancements in the form of new processes or tools that improve the effectiveness of our SDLC while ensuring secure development practices are followed and measured. You will be responsible for maintaining order in the DevOps environment by ensuring all stakeholders (testers, developers, architects, product owners, CloudOps, IT Ops…) are trained in operating procedures and best practices. With the variety of environments, platforms, technologies and languages, you must be comfortable working in both Windows and Linux environments, including PowerShell and bash scripting, database administration as well as bare-metal virtualization technologies and public cloud environments (AWS).

Key Responsibilities
- Support pre-production services: engage in system design consulting, develop software platforms and frameworks, conduct capacity planning, and lead launch reviews to ensure smooth deployment and operational readiness before services go live.
- Scale and evolve systems: ensure sustainable system scaling through automation, continuously pushing for improvements in system architecture, reliability, and deployment velocity.
- Champion Infrastructure-as-Code (IaC) practices to ensure scalability, repeatability, and consistency across environments; drive the selection and implementation of portable provisioning and automation tools (e.g., Terraform, Packer) to enhance infrastructure flexibility and efficiency.
- Evangelize across teams: work closely with development and QA teams to ensure smooth and reliable operations, promoting a culture of collaboration in addition to DevOps best practices.
- Optimize CI/CD pipelines: lead the development, optimization, and maintenance of CI/CD pipelines to enable seamless code deployment, reduce manual processes, and ensure high-quality releases.
- Enhance observability and monitoring: implement comprehensive monitoring, logging, and alerting solutions, using metrics to drive reliability and performance improvements across production systems.
- Administer and optimize DevOps tools (e.g., Jenkins, Jira, Confluence, Bitbucket), providing user support as needed and focusing on automation to reduce manual interventions.
- Mentor and guide team members: provide technical leadership and mentorship to junior DevOps engineers, fostering continuous learning and knowledge sharing within the team.

Qualifications
- 7-10 years in DevOps or related software engineering, or an equivalent combination of education and experience
- Proven expertise in AWS cloud services; experience with other cloud platforms (Azure, GCP) is a plus
- Advanced proficiency in Infrastructure as Code (IaC) using Terraform, with experience managing complex, multi-module setups for provisioning infrastructure across environments
- Strong experience with configuration management tools, particularly Ansible (preferred) and/or Chef, for automating system and application configurations
- Expertise in implementing CI/CD best practices (Jenkins, CircleCI, TeamCity, or GitLab)
- Experience with version control systems (e.g., Git, Bitbucket), and developing branching strategies for large-scale, multi-team projects
- Familiarity with containerization (Docker) and cloud orchestration (Kubernetes, ECS, EKS, Helm)
- Functional understanding of various logging and observability tools (Grafana, Loki, Fluent Bit, Prometheus, ELK stack, Dynatrace, etc.)
- Familiarity with build automation in Windows and Linux, including build tools (MSBuild, Make), package managers (NuGet, NPM, Maven) and artifact repositories (Artifactory, Nexus)
- Working experience in Windows and Linux systems, CLI and scripting
- Programming experience with one or more of Python, Groovy, Go, C#, Ruby, PowerShell
- Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent work experience)
- Excellent problem-solving and troubleshooting skills, with the ability to diagnose complex system issues and design effective solutions
- Strong communication and collaboration skills, with experience mentoring team members and working closely with development, operations, and security teams

Preferred Qualifications
- Domain experience in the convenience retail industry, ERP, logistics or financial transaction processing solutions
- Any relevant certifications are a plus
- Any other experience with common Cloud Operations/DevOps tools and practices is a plus

Behavioral Competencies
Cultivates Innovation, Decision Quality, Manages Complexity, Drives Results, Business Insight

PDI is committed to offering a well-rounded benefits program, designed to support and care for you and your family throughout your life and career. This includes a competitive salary, market-competitive benefits, and a quarterly perks program. We encourage a good work-life balance with ample time away and, where appropriate, hybrid working arrangements. Employees have access to continuous learning, professional certifications, and leadership development opportunities. Our global culture fosters diversity, inclusion, and values authenticity, trust, curiosity, and diversity of thought, ensuring a supportive environment for all.

Posted 2 days ago

Apply

6.0 - 10.0 years

0 Lacs

india

Remote

About the Role We are seeking a highly skilled Data Engineer with strong Snowflake experience to join our team. The ideal candidate will be responsible for designing, building, and optimizing scalable data pipelines and solutions on the Snowflake Data Cloud platform. You will collaborate with cross-functional teams to ensure high data quality, availability, and performance for analytics and business insights. Key Responsibilities Design, develop, and maintain data pipelines and ETL/ELT workflows using Snowflake. Optimize Snowflake databases, schemas, and queries for performance and cost efficiency. Integrate data from multiple sources (structured, semi-structured, unstructured) into Snowflake. Implement best practices for data modeling, security, governance, and monitoring. Required Skills & Experience 6-10 years of experience as a Data Engineer (or similar role). Strong hands-on experience with Snowflake (data modeling, query tuning, warehouse optimization). Familiarity with cloud platforms (AWS, Azure, or GCP) and services like S3, Data Lake, etc. Strong problem-solving and analytical skills with attention to detail.
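The incremental ELT workflows described in this role typically come down to upserts into Snowflake via MERGE statements. As a hedged sketch (the `analytics.orders`/`staging.orders` names and columns are hypothetical, and a production builder would also handle identifier quoting and escaping), a small Python helper can assemble the statement such a pipeline would issue:

```python
def build_merge_sql(target: str, staging: str, key_cols: list, update_cols: list) -> str:
    """Generate a Snowflake-style MERGE statement for an incremental upsert.

    Illustrative only: table/column names are caller-supplied and are not
    quoted or validated here.
    """
    on_clause = " AND ".join(f"t.{c} = s.{c}" for c in key_cols)
    set_clause = ", ".join(f"t.{c} = s.{c}" for c in update_cols)
    insert_cols = ", ".join(key_cols + update_cols)
    insert_vals = ", ".join(f"s.{c}" for c in key_cols + update_cols)
    return (
        f"MERGE INTO {target} t USING {staging} s ON {on_clause} "
        f"WHEN MATCHED THEN UPDATE SET {set_clause} "
        f"WHEN NOT MATCHED THEN INSERT ({insert_cols}) VALUES ({insert_vals})"
    )

# Hypothetical pipeline step: sync a staged orders table into analytics.
sql = build_merge_sql("analytics.orders", "staging.orders",
                      key_cols=["order_id"], update_cols=["status", "amount"])
print(sql)
```

In a real pipeline this string would be executed through the Snowflake connector or wrapped by an orchestration tool; the point is the upsert shape, not the client code.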

Posted 2 days ago

Apply

7.0 years

0 Lacs

india

On-site

We are seeking a highly experienced Senior Database Administrator (DBA) to manage, optimize, and safeguard mission-critical databases across production and development environments. The ideal candidate will bring deep expertise in database performance, scalability, and high availability, while ensuring security, compliance, and seamless data operations. Key Responsibilities Administer, maintain, and optimize databases (Oracle, MySQL, PostgreSQL, SQL Server, or NoSQL systems). Ensure database availability, reliability, performance, and scalability for production and enterprise systems. Design and implement backup, recovery, disaster recovery (DR), and high availability (HA) strategies. Monitor and troubleshoot performance bottlenecks; apply tuning techniques for queries, indexes, and overall system performance. Plan and execute database upgrades, migrations, and patching with minimal downtime. Collaborate with application developers to design efficient schemas, queries, and data access strategies. Implement and enforce security standards, access controls, and compliance requirements (GDPR, HIPAA, SOX, etc.). Automate routine DBA tasks using scripting (Shell, Python, PowerShell, etc.). Provide mentorship and guidance to junior DBAs and cross-functional teams. Stay updated with emerging database technologies, cloud-native solutions, and industry best practices. Required Skills & Qualifications Bachelor’s/Master’s degree in Computer Science, Information Systems, or a related field. 7+ years of experience as a DBA, with expertise in at least one major RDBMS (Oracle, MySQL, PostgreSQL, SQL Server) and exposure to NoSQL databases (MongoDB, Cassandra, etc.). Proven experience with performance tuning, replication, partitioning, sharding, and clustering. Hands-on knowledge of cloud databases (AWS RDS/Aurora, Azure SQL, GCP Cloud SQL/Spanner). Strong skills in backup/recovery tools, monitoring platforms, and automation frameworks.
Proficiency in SQL and scripting languages for automation and troubleshooting. Deep understanding of data security, encryption, and compliance frameworks. Excellent problem-solving and communication skills; ability to manage complex projects independently.
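The "automate routine DBA tasks using scripting" responsibility above often starts with backup retention. A minimal standard-library sketch, assuming a hypothetical `name_YYYYMMDD.dump` file naming convention (adapt the parsing to whatever your backup tooling actually emits):

```python
from datetime import datetime, timedelta

def expired_backups(filenames, retention_days, today=None):
    """Return backup files older than the retention window.

    Assumes a hypothetical 'dbname_YYYYMMDD.dump' naming convention;
    files that do not match are skipped rather than deleted.
    """
    today = today or datetime.now()
    cutoff = today - timedelta(days=retention_days)
    old = []
    for name in filenames:
        stem = name.rsplit(".", 1)[0]       # strip the extension
        stamp = stem.rsplit("_", 1)[-1]     # take the trailing date token
        try:
            taken = datetime.strptime(stamp, "%Y%m%d")
        except ValueError:
            continue                         # not a backup file we recognize
        if taken < cutoff:
            old.append(name)
    return old

files = ["sales_20240101.dump", "sales_20240610.dump", "notes.txt"]
# With a fixed "today" of 2024-06-15 and 30-day retention, only the
# January backup falls outside the window.
print(expired_backups(files, retention_days=30, today=datetime(2024, 6, 15)))
```

A real job would pair this with the actual deletion (and, more importantly, a restore test) rather than just listing candidates.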

Posted 2 days ago

Apply

8.0 years

0 Lacs

india

Remote

Job Title: Python Developer Experience: 8+ Years Location: Remote Employment Type: Full-Time Key Responsibilities Problem Solving: Debug and troubleshoot complex issues efficiently. Log Analysis: Search, analyze, and interpret logs to identify and resolve issues. Cloud Knowledge: Apply a foundational understanding of Google Cloud Platform (GCP) services and cloud concepts. Team Collaboration: Communicate effectively and work seamlessly in a team environment. Continuous Learning: Stay updated with emerging technologies and adapt quickly to new tools and methodologies. Required Qualifications Education: Bachelor’s degree in Computer Science, Computer Engineering, Information Systems, or equivalent work experience (including open-source web services development). Python Expertise: 8+ years of Python API/RESTful service development using FastAPI or Django. Database Skills: 5+ years working with relational databases like PostgreSQL and MySQL. Web Scraping: Proficient with Selenium and BeautifulSoup for data extraction. Testing: Strong experience writing unit and functional tests using PyTest.
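The log-analysis responsibility above can be illustrated with the standard library alone. The log line format here is an assumption (timestamp, level, message); adjust the regex to your actual logger's output:

```python
import re
from collections import Counter

# Assumed (hypothetical) log format: "2024-06-01 12:00:00 ERROR payment timeout"
LINE_RE = re.compile(r"^\S+ \S+ (?P<level>[A-Z]+) (?P<msg>.*)$")

def error_summary(lines):
    """Count log lines per severity level -- a first pass at triage."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            counts[m.group("level")] += 1
    return counts

logs = [
    "2024-06-01 12:00:00 ERROR payment timeout",
    "2024-06-01 12:00:01 INFO request served",
    "2024-06-01 12:00:02 ERROR payment timeout",
]
print(error_summary(logs))  # → Counter({'ERROR': 2, 'INFO': 1})
```

The same pattern scales up by grouping on `msg` instead of `level` to find the most frequent error messages, which is usually the faster route to root cause.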

Posted 2 days ago

Apply

0.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

Job Title: Technical Lead – React, Python, Node.js/Express.js, Cloud & DevOps Location: Pune (Onsite) Experience Required: 7+ years Joining: Immediate Key Responsibilities: Lead the design, development, and delivery of full-stack applications using React, Python, and Node.js/Express.js. Drive architecture decisions, best practices, and code quality across the team. Manage and mentor a team of developers, ensuring timely delivery of high-quality solutions. Build and deploy scalable microservices and APIs. Implement and manage CI/CD pipelines and cloud-native deployments. Collaborate with stakeholders, product managers, and cross-functional teams for project success. Ensure application performance, security, and reliability in production environments. Required Skills: Strong hands-on expertise in React.js (front-end). Proficiency in Python (Django/Flask/FastAPI) and Node.js/Express.js (back-end). Solid understanding of REST APIs, GraphQL, and microservices architecture. Experience with cloud platforms (AWS, Azure, or GCP). Strong knowledge of DevOps practices: Docker, Kubernetes, Jenkins/GitLab CI/CD. Proficiency in version control (Git/GitHub/GitLab) and Agile/Scrum methodologies. Knowledge of databases: SQL (PostgreSQL/MySQL) and NoSQL (MongoDB, Redis). Hands-on experience with monitoring and logging tools: Prometheus, Grafana, ELK/EFK. Strong leadership, problem-solving, and communication skills. Good to Have: Experience with serverless platforms (AWS Lambda, Azure Functions, GCP Cloud Functions). Exposure to IaC tools: Terraform, Ansible. Familiarity with testing frameworks (Jest, PyTest, Mocha/Chai). Knowledge of security best practices in cloud and application development. What We Offer: Competitive compensation. Opportunity to lead a high-performing engineering team. Exposure to a modern tech stack and large-scale projects. A collaborative and growth-oriented work culture. How to Apply: Interested and suitable candidates are invited to apply with an updated resume.
Job Types: Full-time, Permanent Pay: Up to ₹2,500,000.00 per year Benefits: Commuter assistance Flexible schedule Health insurance Paid sick time Paid time off Provident Fund Ability to commute/relocate: Pune, Maharashtra: Reliably commute or planning to relocate before starting work (Preferred) Application Question(s): Do you have hands-on experience in React, Python, Node.js/Express.js, Cloud & DevOps? (Yes/No) What is your current CTC? What is your expected CTC? What is your official notice period/last working day? Experience: Tech Lead: 7 years (Required) Work Location: In person

Posted 2 days ago

Apply

6.0 years

0 Lacs

hyderabad, telangana, india

On-site

Job Role: AI/ML Engineer Job Location: New Delhi / Hyderabad, TS / Gurgaon, HR / Chennai, TN / Bangalore, KA / Noida, UP Work Mode: Hybrid/WFO Skills Required: Artificial Intelligence (AI) - Gen AI, ML, Python, AWS, LLM Experience: 6-8 Years Job Description: Design, develop, and deploy AI/ML models and solutions; optimize and fine-tune AI/ML models for performance and scalability. Utilize solid development skills in programming languages such as Python, SQL, PySpark, Hive, and Shell Scripting. Ensure best practices in coding, testing, and deployment. Stay updated with the latest advancements in AI/ML techniques and tools, and identify opportunities to apply them to enhance existing solutions. Work on the full life cycle of AI/ML projects, including data preparation, model development, tuning, and deployment. Manage multiple AI/ML projects simultaneously, ensuring timely delivery and quality. Collaborate with cross-functional teams to define project requirements and deliverables. Communicate effectively with stakeholders to understand their needs and provide technical insights. Conduct research to identify new opportunities for AI/ML applications. Provide explanations and interpretations within your area of expertise. Produce and distribute scheduled and ad-hoc reports on product development and performance. Essential Skills: Bachelor’s or Master’s degree in Computer Science, Engineering, Statistics, or a related field. Experience: 4+ years of experience in AI/ML. Hands-on experience in delivering production-grade AI/ML projects. Experience with deploying AI/ML models in production environments using cloud platforms like AWS, Azure, or GCP. Solid background in programming languages such as Python, SQL, PySpark, Hive, and Shell Scripting.
Experience with cloud platform AI stacks like AWS Bedrock, Azure AI, and GCP Vertex AI is a plus. Proven excellent problem-solving and analytical skills. Proficiency in frameworks such as Scikit-learn, TensorFlow, and PyTorch. Solid programming skills in Python, with experience in pandas, NumPy, and SQL. Ability to work in a fast-paced, dynamic environment. Robust skills in EDA and in building and maintaining AI/ML data pipelines. Solid communication and interpersonal skills.
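The "full life cycle" bullet above (data preparation, model development, tuning) can be shown in miniature with NumPy, which the posting itself lists. Everything here is synthetic and illustrative; a closed-form least-squares fit stands in for the model-training step:

```python
import numpy as np

# "Data preparation": synthetic, noiseless data following y = 2x + 1.
X = np.linspace(0.0, 10.0, 50)
y = 2.0 * X + 1.0

# "Feature engineering": append a bias column so the intercept is learned too.
A = np.column_stack([X, np.ones_like(X)])

# "Model development": ordinary least squares via numpy.linalg.lstsq.
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
slope, intercept = coef

# "Evaluation": on noiseless data the fit should recover the true parameters.
print(slope, intercept)
```

Real projects swap the synthetic arrays for prepared features, the closed-form fit for a framework model (Scikit-learn, TensorFlow, PyTorch), and the print for a held-out evaluation, but the stages are the same.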

Posted 2 days ago

Apply

0 years

0 Lacs

trivandrum, kerala, india

On-site

#Urgently Hiring# 🤝 Data Architect Experience - 12 to 17 Yrs Work Location - Kochi/Trivandrum Job Type - Full Time, Permanent Mandatory Skills - Azure Services (Primary) and AWS/GCP, ADF Interested candidates, please send your updated resume to: tobin.philips@greenbayit.com Mob No: 8943011444

Posted 2 days ago

Apply

0 years

0 Lacs

pune, maharashtra, india

On-site

Position: GCP Data Engineer Location: Pune Duration: Contract to Hire Job Description: GCP, Python, PL/SQL

Posted 2 days ago

Apply

5.0 years

0 Lacs

pune, maharashtra, india

On-site

About Client: Our client is one of the Top 5 software giants in India, with over USD 11.3 billion in revenue and a global workforce of 240,000 employees. It delivers end-to-end technology, consulting, and business process services to clients across the globe, has a presence in 60+ countries, and is publicly traded on the NSE & BSE (India) and NYSE (USA). Job Title: Technical Program Manager Exp: 2+ to 5 years Location: Pune Salary: As per market Notice Period: 0-15/serving Technical Program Manager I/II: JD Program Management Team/Bioinformatics BU (Full Time Employment) Excelra is looking for highly skilled and motivated members for the Program Management Team with technical experience across multiple domains (AI/ML, NGS/human genetics/statistics, software development, project management, agile methodology) in bioinformatics to join the Bioinformatics team. The successful candidate will be expected to demonstrate the following competencies: 1. Project management as well as program management across different projects in key client accounts. 2. Liaise between the client and Excelra. 3. Contribute to efforts to increase operational efficiency among projects. 4. Collaborate with delivery teams for successful delivery of projects. 5. Interact with the technical stakeholders of clients to identify new business opportunities. Qualifications: • PhD with 2-3 years of industrial experience, or M.Sc/M.Tech with 5+ years’ experience in Bioinformatics, Computer Science, Bioengineering, Computational Biology, or a related field. • A strong track record of omics (NGS) data analysis, scRNA-Seq analysis or ML/AI, and downstream systematic interpretation is a must. • Expertise in one or more programming/scripting languages such as R, Python, or shell script for complex data analysis. • Experience in image analysis, knowledge graphs, deep learning, and/or LLMs is an advantage. • Some experience in data engineering techniques and full-stack development is expected.
• Understanding of current best practices in computational biology data management (NGS/Transcriptomics/Microarray/Proteomics/Clinical trials/Text mining, etc.) is an added advantage. • Experience with Docker, Linux environments, or cloud computing (AWS/GCP) is a plus. • Experience with relational databases such as Postgres, MySQL, or Oracle is a plus. • Experience with version control systems such as GitHub is a plus. • Ability to generate in-silico experimental workflows, with in-depth knowledge of proprietary and public biological databases, methods, and tools. • Ability to generate scientific hypotheses, along with good scientific/technical communication skills, would be preferred.

Posted 2 days ago

Apply

5.0 years

0 Lacs

pune, maharashtra, india

On-site

Job Title: Gen AI Engineer Location: Pune Required Skills: 5+ years of experience building machine learning models and systems 1+ years working with LLMs and Generative AI (prompt engineering, RAG, agents) Strong programming skills in Python, LangChain/LangGraph, and SQL Hands-on experience with cloud platforms (Azure, GCP, AWS) Ability to guide teams on technical roadmaps and AI best practices Excellent communication skills to collaborate with product and business stakeholders Roles & Responsibilities: Design, fine-tune, and deploy LLM-based solutions using techniques like prompt engineering, RAG, and agents Own and maintain scalable, reusable, and high-performance codebases Deploy and optimize GenAI applications on cloud environments with CI/CD best practices Collaborate with SMEs, product owners, and data teams to align business and technical goals Mentor engineers on ML/LLM development practices Stay updated on GenAI advancements to drive continuous innovation
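A minimal sketch of the RAG pattern named in the skills above, with naive word-overlap retrieval standing in for a real vector store. Everything here (documents, scoring, prompt template) is illustrative and not a LangChain API; in production the `retrieve` step would query embeddings:

```python
import re

def _tokens(text):
    """Lowercased word tokens; a crude stand-in for an embedding model."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, docs, k=1):
    """Rank documents by word overlap with the query and return the top k."""
    q = _tokens(query)
    ranked = sorted(docs, key=lambda d: len(q & _tokens(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Assemble a grounded prompt from retrieved context (the 'A' in RAG)."""
    context = "\n".join(retrieve(query, docs, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Snowflake warehouses are billed per second of compute.",
    "LangChain agents route tool calls based on model output.",
    "GCP Vertex AI hosts managed model endpoints.",
]
prompt = build_prompt("How are Snowflake warehouses billed?", docs)
print(prompt)
```

The assembled prompt is what would be sent to the LLM; grounding the answer in retrieved context is the core idea the role's "prompt engineering, RAG, agents" line refers to.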

Posted 2 days ago

Apply

7.0 - 10.0 years

0 Lacs

chennai, tamil nadu, india

On-site

At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By “Connecting Convenience” across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications, such as GasBuddy. We’re a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth. Role Overview Do you love creating solutions that unlock developer productivity and bring teams together? Do you insist on the highest standards for the software your team develops? Are you an advocate of fast release cycle times, continuous delivery, and measurable quality? If this is you, then join an energetic team of DevOps Engineers building next-generation development applications for PDI! As a DevOps Engineer, you will partner with a team of senior engineers in the design, development, and maintenance of our CI/CD DevOps platform for new and existing PDI solutions. The platform will be used internally by the engineering teams, providing them an internal pipeline to work with POCs, alphas, betas, and release-candidate environments, as well as supporting the pipeline into our production stage and release environments managed by our CloudOps Engineers and running hybrid clouds composed of PDI datacenter-based private cloud clusters federated with public cloud-based clusters. You will play a key role in designing and building our CI/CD delivery pipeline as we drive to continuously increase our cloud maturity. You will be supporting automated deployment mechanisms, writing hybrid cloud infrastructure as code, automated testing, source control integration, and lab environment management.
You will review, recommend, and implement system enhancements in the form of new processes or tools that improve the effectiveness of our SDLC while ensuring secure development practices are followed and measured. You will be responsible for maintaining order in the DevOps environment by ensuring all stakeholders (testers, developers, architects, product owners, CloudOps, IT Ops…) are trained in operating procedures and best practices. With the variety of environments, platforms, technologies, and languages, you must be comfortable working in both Windows and Linux environments, including PowerShell and bash scripting, database administration, as well as bare-metal virtualization technologies and public cloud environments (AWS). Key Responsibilities Support pre-production services: Engage in system design consulting, develop software platforms and frameworks, conduct capacity planning, and lead launch reviews to ensure smooth deployment and operational readiness before services go live. Scale and evolve systems: Ensure sustainable system scaling through automation, continuously pushing for improvements in system architecture, reliability, and deployment velocity. Champion Infrastructure-as-Code (IaC) practices to ensure scalability, repeatability, and consistency across environments. Drive the selection and implementation of portable provisioning and automation tools (e.g., Terraform, Packer) to enhance infrastructure flexibility and efficiency. Evangelize across teams: Work closely with development and QA teams to ensure smooth and reliable operations, promoting a culture of collaboration in addition to DevOps best practices. Optimize CI/CD pipelines: Lead the development, optimization, and maintenance of CI/CD pipelines to enable seamless code deployment, reduce manual processes, and ensure high-quality releases.
Enhance observability and monitoring: Implement comprehensive monitoring, logging, and alerting solutions, using metrics to drive reliability and performance improvements across production systems. Administer and optimize DevOps tools (e.g., Jenkins, Jira, Confluence, Bitbucket), providing user support as needed and focusing on automation to reduce manual interventions. Mentor and guide team members: Provide technical leadership and mentorship to junior DevOps engineers, fostering continuous learning and knowledge sharing within the team. Qualifications 7-10 years in DevOps or related software engineering, or an equivalent combination of education and experience Proven expertise in AWS cloud services; experience with other cloud platforms (Azure, GCP) is a plus. Advanced proficiency in Infrastructure as Code (IaC) using Terraform, with experience managing complex, multi-module setups for provisioning infrastructure across environments. Strong experience with configuration management tools, particularly Ansible (preferred) and/or Chef, for automating system and application configurations. Expertise in implementing CI/CD best practices (Jenkins, CircleCI, TeamCity, or GitLab) Experience with version control systems (e.g., Git, Bitbucket) and developing branching strategies for large-scale, multi-team projects. Familiarity with containerization (Docker) and cloud orchestration (Kubernetes, ECS, EKS, Helm) Functional understanding of various logging and observability tools (Grafana, Loki, Fluent Bit, Prometheus, ELK stack, Dynatrace, etc.)
Familiarity with build automation in Windows and Linux, and with the various build tools (MSBuild, Make), package managers (NuGet, NPM, Maven), and artifact repositories (Artifactory, Nexus) Working experience in Windows and Linux systems, CLI, and scripting Programming experience with one or more of Python, Groovy, Go, C#, Ruby, PowerShell Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent work experience). Excellent problem-solving and troubleshooting skills, with the ability to diagnose complex system issues and design effective solutions. Strong communication and collaboration skills, with experience mentoring team members and working closely with development, operations, and security teams. Preferred Qualifications Domain experience in the Convenience Retail Industry, ERP, Logistics, or Financial transaction processing solutions Any relevant certifications are a plus Any other experience with common Cloud Operations/DevOps tools and practices is a plus Behavioral Competencies: Cultivates Innovation Decision Quality Manages Complexity Drives Results Business Insight PDI is committed to offering a well-rounded benefits program, designed to support and care for you and your family throughout your life and career. This includes a competitive salary, market-competitive benefits, and a quarterly perks program. We encourage a good work-life balance with ample time off and, where appropriate, hybrid working arrangements. Employees have access to continuous learning, professional certifications, and leadership development opportunities. Our global culture fosters diversity, inclusion, and values authenticity, trust, curiosity, and diversity of thought, ensuring a supportive environment for all.

Posted 2 days ago

Apply


0 years

0 Lacs

chennai, tamil nadu, india

On-site

At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By “Connecting Convenience” across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications, such as GasBuddy. We’re a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth. Role Overview: PDI is seeking a talented and motivated full-time Data Engineer III to join our elite agile data services team responsible for developing and maintaining our industry-leading cloud-based big data and data analytics infrastructure serving major global Fortune 500 companies. The ideal candidate will have hands-on experience in coding data pipelines, administering databases, and working with business users to understand and meet their data requirements. This role involves maintaining high performance and security of our data systems, performing quality assurance, and supporting the company’s data infrastructure, primarily using AWS, Snowflake, and DBT.
Key Responsibilities: Design and manage complex data architectures. Lead the development and optimization of data pipelines and ETL processes. Mentor junior engineers and provide technical guidance. Collaborate with cross-functional teams to understand and meet data requirements. Ensure the reliability and performance of data systems. Conduct data validation and quality assurance. Document data workflows and technical specifications. Participate in agile development processes. Implement industry standards and best practices. Maintain data security and compliance. Provide on-call support as required. Estimate and plan data engineering projects. Develop strategies for data storage, processing, and archiving. Troubleshoot and resolve complex data issues. Qualifications: Advanced SQL skills and proficiency in multiple programming languages. Extensive experience with data warehousing, specifically Snowflake. Proficiency in DBT (Data Build Tool). Extensive experience in cloud platforms such as AWS, GCP, or Azure. Strong problem-solving and project management skills. Excellent communication and leadership abilities. Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field. Preferred Qualifications: Certifications such as Snowflake SnowPro Core Certification, dbt Certification, or AWS Certified Data Analytics are a plus. Behavioral Competencies: Cultivates Innovation Decision Quality Manages Complexity Drives Results Business Insight PDI is committed to offering a well-rounded benefits program, designed to support and care for you and your family throughout your life and career. This includes a competitive salary, market-competitive benefits, and a quarterly perks program. We encourage a good work-life balance with ample time off and, where appropriate, hybrid working arrangements. Employees have access to continuous learning, professional certifications, and leadership development opportunities.
Our global culture fosters diversity, inclusion, and values authenticity, trust, curiosity, and diversity of thought, ensuring a supportive environment for all.
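The "data validation and quality assurance" responsibility in the posting above can be sketched in plain Python. The rule set and column names below are hypothetical; in practice checks like these usually live in dbt tests or a dedicated pipeline step:

```python
def validate_rows(rows, required, non_negative=()):
    """Return (row_index, problem) pairs for basic data-quality checks.

    `required` columns must be present and non-empty; `non_negative`
    numeric columns must be >= 0. Both rule sets are illustrative only.
    """
    problems = []
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) in (None, ""):
                problems.append((i, f"missing {col}"))
        for col in non_negative:
            value = row.get(col)
            if isinstance(value, (int, float)) and value < 0:
                problems.append((i, f"negative {col}"))
    return problems

# Hypothetical staged rows: the second one fails both checks.
rows = [
    {"order_id": "A1", "amount": 10.5},
    {"order_id": "", "amount": -3.0},
]
print(validate_rows(rows, required=["order_id"], non_negative=["amount"]))
# → [(1, 'missing order_id'), (1, 'negative amount')]
```

Returning structured `(index, problem)` pairs rather than raising on the first failure makes it easy to log every defect in a batch and quarantine only the bad rows.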

Posted 2 days ago

Apply

12.0 years

0 Lacs

thane, maharashtra, india

On-site

About the organization: DMart is one of India's leading retail chains, serving millions of customers across 425+ stores and e-commerce channels throughout India. Our core objective is to offer customers good products at great value. We focus on everyday low pricing, seamless shopping experiences, and data-driven decision making. As we scale our data analytics journey, we're seeking a Data Engineering Lead to drive our next phase of growth.

Key Responsibilities:
- Technical Leadership: Lead architecture design for multiple enterprise data initiatives across cloud, on-premises, and hybrid platforms.
- Data Platform Architecture and Implementation: Define and document multi-layered architectures (Raw, Curated, Analytics) across data warehouses, lakehouses, and operational data stores, with an emphasis on cloud-based solutions.
- Production Management: Run daily processes end to end to ensure efficient and reliable data-lake synchronization across all programs, including oversight of change management for enrichments delivered through change requests (CRs).
- Platform Expertise: Apply deep knowledge of architecture patterns and principles, performance tuning, and best practices where Snowflake is part of the solution stack.
- Schema & Data Modelling: Develop conceptual, logical, and physical data models using consistent standards, including dimensional, normalized, and denormalized structures for analytical and operational use cases.
- Best Practices & Optimization: Provide guidance on data partitioning, indexing, access controls, naming conventions, and performance optimization across multiple platforms.
- Metadata & Data Lineage: Collaborate with governance and stewardship teams to integrate metadata management, lineage mapping, and catalog tools (Snowflake Catalog or equivalents).
- Collaboration: Work closely with analysts, governance leads, engineering teams, and application architects to align models with domain requirements and enterprise architecture standards.
- Documentation: Maintain architecture blueprints and technical diagrams using tools such as Draw.io or equivalent.
- Mentorship: Guide and mentor engineers, modelers, and other technical resources on architectural best practices.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 12+ years of IT experience with a strong background in data engineering, modeling, and architecture.
- 5+ years of experience designing and implementing enterprise data platforms (cloud/on-premises), including at least 3 years with Snowflake.
- Proficiency in SQL, schema design, and data modeling tools.
- Certifications in Snowflake, including data modelling and data extraction/ingestion, will be given preference.
- Strong knowledge of ELT/ETL patterns, data integration, and orchestration tools (e.g., Airflow).
- Integration experience with ETL and ELT tools and applications, including SAP ABAP.
- Experience with streaming data platforms and OData.
- Experience with change data capture (CDC) patterns.
- Experience designing data layers (Raw, Curated, Analytics) with governance and scalability in mind.
- Familiarity with security and compliance frameworks, including RBAC and regulatory requirements.
- Proven track record in project delivery, including performance optimization and handling large-scale datasets.
- Experience with SAP R/3 or SAP BW and Google BigQuery.
- Certifications in cloud platforms or enterprise architecture (e.g., TOGAF, DAMA, SnowPro, Azure, GCP).
- Experience integrating data platforms with BI tools (e.g., Power BI, Tableau).
- Knowledge of MDM and data mesh concepts.
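For candidates unfamiliar with the Raw/Curated/Analytics layering this listing references, here is a minimal, hypothetical Python sketch of the idea. All names and data are invented for illustration; a production implementation would use Snowflake, dbt, or similar platforms rather than in-memory structures:

```python
# Raw layer: data as ingested — untyped strings, inconsistent casing, bad rows.
raw = [
    {"store": "thane", "sku": "A1", "qty": "3"},
    {"store": "THANE", "sku": "A1", "qty": "2"},
    {"store": "pune",  "sku": "B2", "qty": None},  # invalid record
]

# Curated layer: enforce types, standardise keys, drop invalid rows.
curated = [
    {"store": r["store"].lower(), "sku": r["sku"], "qty": int(r["qty"])}
    for r in raw
    if r["qty"] is not None
]

# Analytics layer: aggregate to a reporting grain (total qty per store/sku).
analytics = {}
for r in curated:
    key = (r["store"], r["sku"])
    analytics[key] = analytics.get(key, 0) + r["qty"]

print(analytics)  # {('thane', 'A1'): 5}
```

Each layer only reads from the one below it, which is what makes the governance and lineage responsibilities above tractable.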

Posted 2 days ago

Apply

7.0 - 10.0 years

0 Lacs

hyderabad, telangana, india

On-site

At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By "Connecting Convenience" across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications such as GasBuddy. We're a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth.

Role Overview
Do you love creating solutions that unlock developer productivity and bring teams together? Do you insist on the highest standards for the software your team develops? Are you an advocate of fast release cycle times, continuous delivery, and measurable quality? If so, join an energetic team of DevOps Engineers building next-generation development applications for PDI!

As a DevOps Engineer, you will partner with a team of senior engineers to design, develop, and maintain our CI/CD DevOps platform for new and existing PDI solutions. The platform is used internally by engineering teams, providing a pipeline for POCs, alphas, betas, and release-candidate environments. It also supports the pipeline into our production stage and release environments managed by our CloudOps Engineers, which run hybrid clouds composed of PDI datacenter-based private cloud clusters federated with public cloud-based clusters. You will play a key role in designing and building our CI/CD delivery pipeline as we drive to continuously increase our cloud maturity, supporting automated deployment mechanisms, hybrid cloud infrastructure as code, automated testing, source control integration, and lab environment management.

You will review, recommend, and implement system enhancements in the form of new processes or tools that improve the effectiveness of our SDLC while ensuring secure development practices are followed and measured. You will maintain order in the DevOps environment by ensuring all stakeholders (testers, developers, architects, product owners, CloudOps, IT Ops, and others) are trained in operating procedures and best practices. Given the variety of environments, platforms, technologies, and languages involved, you must be comfortable working in both Windows and Linux environments, including PowerShell and bash scripting, database administration, bare-metal virtualization technologies, and public cloud environments (AWS).

Key Responsibilities:
- Support pre-production services: Engage in system design consulting, develop software platforms and frameworks, conduct capacity planning, and lead launch reviews to ensure smooth deployment and operational readiness before services go live.
- Scale and evolve systems: Ensure sustainable system scaling through automation, continuously pushing for improvements in system architecture, reliability, and deployment velocity.
- Champion Infrastructure-as-Code (IaC) practices to ensure scalability, repeatability, and consistency across environments; drive the selection and implementation of portable provisioning and automation tools (e.g., Terraform, Packer) to enhance infrastructure flexibility and efficiency.
- Evangelize across teams: Work closely with development and QA teams to ensure smooth and reliable operations, promoting a culture of collaboration and DevOps best practices.
- Optimize CI/CD pipelines: Lead the development, optimization, and maintenance of CI/CD pipelines to enable seamless code deployment, reduce manual processes, and ensure high-quality releases.
- Enhance observability and monitoring: Implement comprehensive monitoring, logging, and alerting solutions, using metrics to drive reliability and performance improvements across production systems.
- Administer and optimize DevOps tools (e.g., Jenkins, Jira, Confluence, Bitbucket), providing user support as needed and focusing on automation to reduce manual interventions.
- Mentor and guide team members: Provide technical leadership and mentorship to junior DevOps engineers, fostering continuous learning and knowledge sharing within the team.

Qualifications:
- 7-10 years in DevOps or related software engineering, or an equivalent combination of education and experience
- Proven expertise in AWS cloud services; experience with other cloud platforms (Azure, GCP) is a plus
- Advanced proficiency in Infrastructure as Code (IaC) using Terraform, with experience managing complex, multi-module setups for provisioning infrastructure across environments
- Strong experience with configuration management tools, particularly Ansible (preferred) and/or Chef, for automating system and application configurations
- Expertise in implementing CI/CD best practices (Jenkins, CircleCI, TeamCity, or GitLab)
- Experience with version control systems (e.g., Git, Bitbucket) and with developing branching strategies for large-scale, multi-team projects
- Familiarity with containerization (Docker) and cloud orchestration (Kubernetes, ECS, EKS, Helm)
- Functional understanding of various logging and observability tools (Grafana, Loki, Fluent Bit, Prometheus, ELK stack, Dynatrace, etc.)
- Familiarity with build automation on Windows and Linux, including build tools (MSBuild, Make), package managers (NuGet, NPM, Maven), and artifact repositories (Artifactory, Nexus)
- Working experience with Windows and Linux systems, the CLI, and scripting
- Programming experience with one or more of Python, Groovy, Go, C#, Ruby, PowerShell
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience)
- Excellent problem-solving and troubleshooting skills, with the ability to diagnose complex system issues and design effective solutions
- Strong communication and collaboration skills, with experience mentoring team members and working closely with development, operations, and security teams

Preferred Qualifications:
- Domain experience in the convenience retail industry, ERP, logistics, or financial transaction processing solutions
- Any relevant certifications are a plus
- Other experience with common Cloud Operations/DevOps tools and practices is a plus

Behavioral Competencies:
- Cultivates Innovation
- Decision Quality
- Manages Complexity
- Drives Results
- Business Insight
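The build-test-deploy flow such a CI/CD platform enforces can be sketched as a minimal, hypothetical pipeline. The stage functions and artifact names below are invented for illustration and are not PDI's actual tooling; real pipelines would run in Jenkins, CircleCI, or similar:

```python
# Minimal sequential CI/CD pipeline sketch: build -> test -> deploy.
def build():
    """Stand-in for compiling/packaging; returns an artifact identifier."""
    return "artifact-1.0.0"

def run_tests(artifact):
    """Stand-in for an automated test suite gating the deployment."""
    return artifact.endswith("1.0.0")

def deploy(artifact, env):
    """Stand-in for pushing the artifact to a target environment."""
    return f"deployed {artifact} to {env}"

def run_pipeline(env="staging"):
    artifact = build()
    if not run_tests(artifact):
        # A failed gate halts the pipeline before anything reaches env.
        raise RuntimeError("tests failed; halting pipeline")
    return deploy(artifact, env)

print(run_pipeline())  # deployed artifact-1.0.0 to staging
```

The key property, mirrored in real pipelines, is that deployment is unreachable unless the test gate passes.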

Posted 2 days ago

Apply
