4.0 - 6.0 years
0 Lacs
gurgaon, haryana, india
On-site
Job Description

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

How will you make an impact in this role?

We are building an energetic, high-performance team with a nimble and creative mindset to drive our technology and products. American Express (AXP) is a powerful brand, a great place to work, and has unparalleled scale. Join us for an exciting opportunity in Marketing Data Technology (the MarTech Data Team) within American Express Technologies. This team specializes in creating and expanding a suite of data and insight solutions that power the customer marketing ecosystem. The team creates and manages various batch and real-time marketing data products that fuel the customer marketing platforms. As part of the team, you will get numerous opportunities to use and learn big data and GCP cloud technologies.

Job Responsibilities:
- Deliver features or software functionality independently and reliably.
- Develop technical design documentation.
- Function as a core member of an agile team by contributing to software builds through consistent development practices with respect to tools, common components, and documentation.
- Perform hands-on ETL development for marketing data applications.
- Participate in code reviews and automated testing.
- Help junior members of the team deliver.
- Demonstrate analytical thinking: recommend improvements and best practices, and conduct experiments to prove or disprove them.
- Provide continuous support for ongoing application availability.
- Learn, understand, and participate fully in all team ceremonies, including work breakdown, estimation, and retrospectives.
- Bring high energy and a willingness to learn new technologies and exploit them to their optimal potential, including a substantiated ability to innovate and take pride in quickly deploying working software.

Minimum Qualifications:
- Bachelor's degree with a minimum of 4 years of overall software design and development experience.
- Expert in SQL and data warehousing concepts.
- Hands-on expertise with cloud platforms, ideally Google Cloud Platform (GCP).
- Working knowledge of data storage solutions like BigQuery or Cloud SQL and data engineering tools like Airflow or Cloud Workflows.
- Experience with other GCP services like Cloud Storage, Pub/Sub, or Data Catalog.
- Familiarity with Agile or other rapid application development methods.
- Hands-on experience with one or more programming languages (Java, Python).
- Hands-on expertise with software development in big data (Hadoop, MapReduce, Spark, Hive).
- Experience with CI/CD pipelines, automated test frameworks, DevOps, and source code management tools (XLR, Jenkins, Git, Sonar, Stash, Maven, Jira, Confluence, Splunk, etc.).
- Knowledge of shell scripting tools and Ansible will be an added advantage.
- Strong communication and analytical skills, including effective presentation skills.

We back you with benefits that support your holistic well-being so you can be and deliver your best.
This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
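The role above centers on ETL pipelines built with Airflow-style orchestration feeding BigQuery. As a rough illustration of that pattern (not AmEx's actual codebase), here is a minimal Airflow DAG that runs a daily aggregation inside BigQuery; the project, dataset, and table names are hypothetical.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

# Hypothetical project, dataset, and table names.
SOURCE = "my-project.raw_marketing.events"
TARGET = "my-project.curated_marketing.daily_engagement"

with DAG(
    dag_id="marketing_daily_engagement",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Push the aggregation into BigQuery so the warehouse does the heavy lifting.
    build_daily_engagement = BigQueryInsertJobOperator(
        task_id="build_daily_engagement",
        configuration={
            "query": {
                "query": f"""
                    CREATE OR REPLACE TABLE `{TARGET}` AS
                    SELECT customer_id,
                           DATE(event_ts) AS event_date,
                           COUNT(*)       AS events
                    FROM `{SOURCE}`
                    GROUP BY customer_id, event_date
                """,
                "useLegacySql": False,
            }
        },
    )
```

Keeping the transformation in SQL and using the orchestrator only for scheduling and dependencies is a common design choice for warehouse-centric pipelines like this one.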
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
kochi, kerala
On-site
You have experience developing microservices and cloud-native apps using Java/J2EE, REST APIs, Spring Core, Spring MVC, Spring Boot, and JPA. You are proficient in building and deploying services using Gradle, Maven, Jenkins, etc. as part of a CI/CD process. Additionally, you have worked with Google Cloud Platform (GCP) and have experience with Pub/Sub, Docker, and Kubernetes. You also have experience with a relational database such as Oracle or PostgreSQL.
Posted 5 days ago
0.0 years
0 Lacs
pune, maharashtra, india
On-site
Join Us

At Vodafone, we're not just shaping the future of connectivity for our customers; we're shaping the future for everyone who joins our team. When you work with us, you're part of a global mission to connect people, solve complex challenges, and create a sustainable and more inclusive world. If you want to grow your career whilst finding the perfect balance between work and life, Vodafone offers the opportunities to help you belong and make a real impact.

What you'll do
- Conduct end-to-end impact assessments across all subject areas for new demands.
- Create and maintain comprehensive data architecture documentation, including data models, flow diagrams, and technical specifications.
- Design and implement data pipelines integrating multiple sources, ensuring consistency and quality.
- Collaborate with business stakeholders to align data strategies with organisational goals.
- Support software migration and perform production checks.
- Govern the application of architecture principles within projects.
- Manage database refresh and decommissioning programmes while maintaining service availability.
- Ensure correct database configuration and documentation of infrastructure changes.
- Support third-level supplier engineering teams in root cause analysis and remediation.
- Propose system enhancements and innovative solutions.

Who you are

You are a detail-oriented and collaborative professional with a strong foundation in data architecture and cloud technologies. You possess excellent communication skills and are comfortable working with both technical and non-technical stakeholders. You are passionate about creating scalable data solutions and contributing to a culture of continuous improvement.

What skills you need
- Strong knowledge of Teradata systems and related products.
- Proficiency in SQL and data modelling concepts.
- Experience with GCP tools including Cloud Composer, BigQuery, Pub/Sub, and Cloud Functions.
- Proven ability to communicate complex data concepts effectively.
- Experience in IT infrastructure management environments.
- Ability to influence stakeholders and drive customer satisfaction.

What skills you will learn
- Advanced cloud architecture and data governance practices.
- Cross-functional collaboration and stakeholder engagement.
- Innovation in data pipeline design and optimisation.
- Exposure to global BI projects and scalable data solutions.
- Enhanced leadership and decision-making capabilities.

Not a perfect fit?

Worried that you don't meet all the desired criteria exactly? At Vodafone we are passionate about empowering people and creating a workplace where everyone can thrive, whatever their personal or professional background. If you're excited about this role but your experience doesn't align exactly with every part of the job description, we encourage you to still apply, as you may be the right candidate for this role or another opportunity.

Who we are

We are a leading international telco, serving millions of customers. At Vodafone, we believe that connectivity is a force for good. If we use it for the things that really matter, it can improve people's lives and the world around us. Through our technology we empower people, connecting everyone regardless of who they are or where they live, and we protect the planet, whilst helping our customers do the same. Belonging at Vodafone isn't a concept; it's lived, breathed, and cultivated through everything we do. You'll be part of a global and diverse community, with many different minds, abilities, backgrounds and cultures.
We're committed to increasing diversity, ensuring equal representation, and making Vodafone a place everyone feels safe, valued and included. If you require any reasonable adjustments or have an accessibility request as part of your recruitment journey (for example, extended time or breaks in between online assessments), please refer to our recruitment guidance. Together we can.
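Several of the skills listed above revolve around Pub/Sub messaging between data services. For orientation only, here is a minimal sketch of the publish/consume flow using the google-cloud-pubsub client; the project, topic, and subscription names are made up for illustration.

```python
from concurrent.futures import TimeoutError as FutureTimeout

from google.cloud import pubsub_v1

PROJECT_ID = "my-project"                  # hypothetical project
TOPIC_ID = "asset-events"                  # hypothetical topic
SUBSCRIPTION_ID = "asset-events-pipeline"  # hypothetical subscription

# Publishing: each publish() returns a future resolving to the message id.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)
future = publisher.publish(topic_path, b'{"kpi": "latency_ms", "value": 42}')
print("published message id:", future.result())

# Consuming: the callback acks each message once it is safely processed.
subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)

def callback(message):
    print("received:", message.data)
    message.ack()  # ack only after downstream processing succeeds

streaming_pull = subscriber.subscribe(sub_path, callback=callback)
try:
    streaming_pull.result(timeout=30)  # short demo window
except FutureTimeout:
    streaming_pull.cancel()
```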
Posted 6 days ago
2.0 - 4.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Join us in bringing joy to customer experience. Five9 is a leading provider of cloud contact center software, bringing the power of cloud innovation to customers worldwide. Living our values every day results in our team-first culture and enables us to innovate, grow, and thrive while enjoying the journey together. We celebrate diversity and foster an inclusive environment, empowering our employees to be their authentic selves.

About Five9

Five9 is a leading provider of cloud software for the enterprise contact center market, powering billions of customer interactions annually. Since 2001, Five9 has been at the forefront of the cloud revolution, helping organizations deliver exceptional customer experiences, improve productivity, and achieve measurable business outcomes. We are growing our technology team in Bangalore, India and are looking for a Software Engineer Level 1 to contribute to the design, development, and support of cloud-native services within our global platform. This is an excellent opportunity for a motivated engineer to build hands-on experience in modern cloud technologies while learning from experienced mentors.

This position is based out of one of the offices of our affiliate Acqueon Technologies in India, and will adopt the hybrid work arrangements of that location. You will be a member of the Acqueon team with responsibilities supporting Five9 products, collaborating with global teammates based primarily in the United States.

Key Responsibilities
- Work as part of an agile team to build and enhance scalable microservices
- Develop backend services in Java and contribute to service APIs, data models, and integrations
- Collaborate with senior engineers to implement designs and ensure code quality
- Participate in code reviews, technical discussions, and knowledge-sharing sessions
- Apply best practices in testing, CI/CD, and automation to improve delivery speed and reliability
- Learn and apply principles of performance, scalability, and security in service development
- Support service monitoring, metrics, and troubleshooting under guidance from senior engineers

Key Skills & Experience
- 2+ years of professional software engineering experience (internship or project experience welcome)
- Strong programming skills in Java (Spring/Spring Boot experience a plus)
- Understanding of object-oriented design and basic knowledge of data structures/algorithms
- Familiarity with SQL databases; exposure to NoSQL/cloud databases is a plus
- Willingness to learn containerization and cloud concepts (Kubernetes, GCP, or other cloud providers)
- Positive, self-motivated attitude with a strong desire to learn and grow in a collaborative environment
- Good communication skills and ability to work effectively in a team

Technology & Tools You'll Work With
- Languages & Frameworks: Java, Spring, Spring Boot
- Databases: SQL, CockroachDB, Redis
- Cloud & Infrastructure: Kubernetes, GCP (GKE, Pub/Sub, BigQuery, Looker)
- DevOps & CI/CD: Git, GitLab CI/CD
- Collaboration & Documentation: Swagger, JIRA, Confluence, Slack

Why Five9

At Five9, entry-level engineers are given the opportunity to work on meaningful projects from day one, while receiving mentorship from senior engineers and exposure to modern cloud technologies. You'll gain hands-on experience in building distributed systems and grow your career in an environment that values innovation, learning, and impact. Five9 embraces diversity and is committed to building a team that represents a variety of backgrounds, perspectives, and skills.
The more inclusive we are, the better we are. Five9 is an equal opportunity employer. View our privacy policy, including our privacy notice to California residents, here: https://www.five9.com/pt-pt/legal. Note: Five9 will never request that an applicant send money as a prerequisite for commencing employment with Five9.
Posted 6 days ago
4.0 - 6.0 years
0 Lacs
pune, maharashtra, india
On-site
About Company

At Ishan Technologies, we are on a transformative journey to create future-ready, AI-first solutions that shape industries and empower enterprises. Our AI & Innovation Lab is the hub of our technology strategy, driving advanced R&D, cutting-edge AI/ML development, and high-impact product innovation. We are looking for passionate developers who thrive in building scalable systems and want to be part of an innovation-driven environment.

Role Overview

We are seeking a Python Developer with 4+ years of experience to join our AI & Innovation Research Lab. You will contribute to designing and implementing Python-based solutions that power AI and intelligent automation products. This role is ideal for someone who enjoys hands-on coding, solving complex problems, and working closely with AI/ML experts to bring research into scalable enterprise applications.

Key Responsibilities
- Develop and optimize backend systems and APIs in Python for AI-driven products and platforms.
- Collaborate with AI/ML engineers, data scientists, and product teams to build efficient data processing workflows.
- Build modular, reusable Python components to support microservices and distributed architectures.
- Work with relational and non-relational databases (PostgreSQL, SQL, Firestore, NoSQL).
- Implement integrations with messaging systems (Kafka, RabbitMQ, Pub/Sub).
- Contribute to the design and development of RESTful APIs and asynchronous systems (FastAPI, Uvicorn).
- Write clean, testable, and maintainable code aligned with best practices (CI/CD, Docker, Git, unit testing).
- Participate in code reviews and actively contribute to improving code quality.

Desired Skills & Experience
- 4+ years of professional Python development experience.
- Strong experience with backend frameworks (Django/FastAPI/Flask).
- Solid understanding of OOP, design patterns, and microservices architecture.
- Experience working with databases (SQL & NoSQL) and API integrations.
- Familiarity with cloud platforms (GCP/AWS/Azure) and containerization (Docker/Kubernetes) is a plus.
- Exposure to AI/ML workflows or data-intensive applications is an advantage.
- Strong problem-solving, debugging, and collaboration skills.
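Since the posting calls out RESTful, asynchronous services built with FastAPI and Uvicorn, a minimal sketch of that style of endpoint may help set expectations; the service name, route, and payload shapes are hypothetical.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="inference-gateway")  # hypothetical service name

class PredictRequest(BaseModel):
    text: str

class PredictResponse(BaseModel):
    label: str
    score: float

@app.post("/v1/predict", response_model=PredictResponse)
async def predict(req: PredictRequest) -> PredictResponse:
    # Validate input before doing any work.
    if not req.text.strip():
        raise HTTPException(status_code=422, detail="text must be non-empty")
    # Placeholder for a real model call; returns a canned result.
    return PredictResponse(label="positive", score=0.92)

# Run locally with:  uvicorn app:app --reload
```

Pydantic models give you request validation and a typed response schema for free, which is a large part of why FastAPI is popular for AI-serving APIs like the ones described here.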
Posted 6 days ago
0.0 years
0 Lacs
noida, uttar pradesh, india
On-site
Title: Python Support Engineer

Must-have skills:
- Monitor and maintain the availability of GKE-based applications in a high-pressure production environment.
- Respond to and resolve incidents and service requests related to application functionality and performance.
- Collaborate with development teams to troubleshoot and resolve technical issues in a timely manner.
- Document support processes, procedures, and troubleshooting steps for future reference.
- Participate in the on-call rotation, including off-hours, to provide after-hours support as needed.
- Communicate effectively with stakeholders to provide updates on issue resolution and status.
- Experience with monitoring tools and incident management systems.
- Ability to analyze logs, identify patterns, and trace system failures.
- Solid experience in SQL and database querying for debugging and reporting.
- Experience with monitoring/alerting tools on GCP.

Good to have:
- Strong in Python, with production-level experience.
- Strong in FastAPI development and deployment practices.
- Experience with Google Kubernetes Engine (GKE), including workload deployment, autoscaling, and tuning.
- GCP experience with Cloud Functions, Pub/Sub, Dataflow, Composer, Bigtable, and BigQuery.
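The support duties above hinge on analyzing logs and identifying failure patterns. A small, generic Python sketch of that task is shown below; the log format and file name are assumptions, not a description of the actual production systems.

```python
import re
from collections import Counter

# Hypothetical log format:
# "2025-09-01T10:15:00Z ERROR payment-svc TimeoutError calling /charge"
LINE_RE = re.compile(r"^\S+\s+(?P<level>\w+)\s+(?P<service>[\w-]+)\s+(?P<message>.*)$")

def summarize_errors(path: str, top_n: int = 5) -> list[tuple[str, int]]:
    """Count the most frequent (service, first word of message) error signatures."""
    counts: Counter[str] = Counter()
    with open(path) as fh:
        for line in fh:
            m = LINE_RE.match(line)
            if m and m["level"] == "ERROR":
                signature = f'{m["service"]}: {m["message"].split()[0]}'
                counts[signature] += 1
    return counts.most_common(top_n)

if __name__ == "__main__":
    for signature, n in summarize_errors("app.log"):
        print(f"{n:6d}  {signature}")
```

In a GKE environment the same grouping logic would typically run against Cloud Logging queries rather than flat files, but the pattern-counting idea is the same.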
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
noida, uttar pradesh, india
On-site
Job Description

We are seeking a skilled and motivated Data Engineer with hands-on experience in Google Cloud Platform (GCP) and Python, and a strong understanding of the Aetna healthcare domain. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and solutions that support healthcare analytics, compliance, and operational reporting.

Key Responsibilities:
- Design and develop robust, scalable, and efficient data pipelines using GCP services (BigQuery, Dataflow, Pub/Sub, Cloud Storage, etc.).
- Write clean, maintainable Python code for data ingestion, transformation, and automation.
- Collaborate with analysts and business stakeholders to understand data requirements and deliver actionable insights.
- Ensure data quality, integrity, and compliance with HIPAA and other healthcare regulations.
- Optimize performance of data workflows and troubleshoot issues in production environments.
- Leverage domain knowledge of Aetna's healthcare systems, claims processing, and member/provider data to enhance data solutions.

Qualifications:
- 5+ years of experience in data engineering with a focus on cloud platforms.
- Strong proficiency in Python, SQL, and Google Cloud Platform.
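Dataflow pipelines like those described above are typically written with the Apache Beam Python SDK. Here is a hedged sketch of such a pipeline; the bucket, table, schema, and claim fields are invented for illustration and deliberately contain no real member data.

```python
import json

import apache_beam as beam

# Hypothetical bucket, table, and record layout.
INPUT = "gs://my-bucket/claims/*.json"
TABLE = "my-project:healthcare.claims_curated"

def parse_claim(line: str) -> dict:
    claim = json.loads(line)
    return {
        "claim_id": claim["id"],
        "member_id": claim["member"],
        "amount": float(claim["amount"]),
    }

# Runs on the local DirectRunner by default; pass DataflowRunner
# pipeline options to execute the same code on GCP.
with beam.Pipeline() as p:
    (
        p
        | "Read" >> beam.io.ReadFromText(INPUT)
        | "Parse" >> beam.Map(parse_claim)
        | "DropZeroAmounts" >> beam.Filter(lambda c: c["amount"] > 0)
        | "Write" >> beam.io.WriteToBigQuery(
            TABLE,
            schema="claim_id:STRING,member_id:STRING,amount:FLOAT",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```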
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
mumbai, maharashtra, india
On-site
About Us:

Fluent Health is a dynamic healthcare startup revolutionizing how you manage your healthcare and that of your family. The company will provide customers with high-quality, personalized options, credible information through trustworthy content, and absolute privacy. To assist us in our growth journey, we are seeking a highly motivated and experienced Senior Data Engineer to play a pivotal role in our future success.

Company Website: https://fluentinhealth.com/

Job Description:

We're looking for a Senior Data Engineer to lead the design, implementation, and optimization of our analytical and real-time data platform. In this hybrid role, you'll combine hands-on data engineering with high-level architectural thinking to build scalable data infrastructure, with ClickHouse as the cornerstone of our analytics and data warehousing strategy. You'll work closely with engineering, product, analytics, and compliance teams to establish data best practices, ensure data governance, and unlock insights for internal teams and future data monetization initiatives.

Responsibilities:

Architecture & Strategy:
- Own and evolve the target data architecture, with a focus on ClickHouse for large-scale analytical and real-time querying workloads.
- Define and maintain a scalable and secure data platform architecture that supports various use cases, including real-time analytics, reporting, and ML applications.
- Set data governance and modeling standards, and ensure data lineage, integrity, and security practices are followed.
- Evaluate and integrate complementary technologies into the data stack (e.g., message queues, data lakes, orchestration frameworks).

Data Engineering:
- Design, develop, and maintain robust ETL/ELT pipelines to ingest and transform data from diverse sources into our data warehouse.
- Optimize ClickHouse schema and query performance for real-time and historical analytics workloads.
- Build data APIs and interfaces for product and analytics teams to interact with the data platform.
- Implement monitoring and observability tools to ensure pipeline reliability and data quality.

Collaboration & Leadership:
- Collaborate with data consumers (e.g., product managers, data analysts, ML engineers) to understand data needs and translate them into scalable solutions.
- Work with security and compliance teams to implement data privacy, classification, retention, and access control policies.
- Mentor junior data engineers and contribute to hiring efforts as we scale the team.

Qualifications:
- 5-7 years of experience in data engineering, with at least 2-4 years in a senior or architectural role.
- Expert-level proficiency in ClickHouse or similar columnar databases (e.g., BigQuery, Druid, Redshift).
- Proven experience designing and operating scalable data warehouse and data lake architectures.
- Deep understanding of data modeling, partitioning, indexing, and query optimization techniques.
- Strong experience building ETL/ELT pipelines using tools like Airflow, dbt, or custom frameworks.
- Familiarity with stream processing and event-driven architecture (e.g., Kafka, Pub/Sub).
- Proficiency in SQL and at least one programming language like Python, Scala, or Java.
- Experience with data governance, compliance frameworks (e.g., HIPAA, GDPR), and data cataloging tools.
- Knowledge of real-time analytics use cases and streaming architectures.
- Familiarity with machine learning pipelines and integrating data platforms with ML workflows.
- Experience working in regulated or high-security domains like healthtech, fintech, or enterprise SaaS.
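Because the role is anchored on ClickHouse schema design and query optimization, a brief sketch may be useful. It assumes the official clickhouse-connect Python client and uses hypothetical table and column names; the PARTITION BY and ORDER BY choices illustrate the partitioning and ordering concerns the posting mentions.

```python
from datetime import datetime

import clickhouse_connect  # official ClickHouse Python client

client = clickhouse_connect.get_client(host="localhost", username="default", password="")

# MergeTree table partitioned by month and ordered to match typical filters;
# table and column names are hypothetical.
client.command("""
    CREATE TABLE IF NOT EXISTS events (
        event_time DateTime,
        user_id    UInt64,
        event_type LowCardinality(String),
        payload    String
    )
    ENGINE = MergeTree
    PARTITION BY toYYYYMM(event_time)
    ORDER BY (event_type, user_id, event_time)
""")

client.insert(
    "events",
    [[datetime(2025, 1, 15, 10, 0, 0), 42, "page_view", "{}"]],
    column_names=["event_time", "user_id", "event_type", "payload"],
)

# Daily active users per event type; uniqExact trades speed for exactness.
result = client.query("""
    SELECT event_type, toDate(event_time) AS day, uniqExact(user_id) AS dau
    FROM events
    GROUP BY event_type, day
    ORDER BY day, event_type
""")
for row in result.result_rows:
    print(row)
```

The ORDER BY key is ClickHouse's primary performance lever: queries that filter on its leading columns can skip most of the table, which is exactly the kind of tuning this role would own.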
Posted 1 week ago
10.0 - 12.0 years
0 Lacs
chennai, tamil nadu, india
On-site
Job Description

Deep technical skills: hands-on coding and debugging knowledge in Java, J2EE, Spring Boot microservices, Spring Batch, Postgres, Redis, and GraphQL, with knowledge of cloud platforms, preferably GCP.
- GCP: Cloud Build, Cloud Run, Secret Manager, Pub/Sub, schedulers
- Code quality tools: Fossa, SonarQube, Checkmarx, Cycode, 42Crunch
- Strong team leadership: mentorship, code reviews, support
- Proactive risk management: identifying and mitigating technical risks
- Delivery focus: meeting sprint goals, high-quality code
- Positive team attitude: collaboration, knowledge sharing, effective communication, and the ability to work in a large, diverse team
- Experience with software engineering craftsmanship techniques and best practices
- Practical understanding and usage of version control systems (Git/GitHub) and CI/CD tools (Cloud Build, Tekton)
- Experience with the API automation tools Newman and JMeter

Responsibilities
- Experience piloting new technologies and designing implementation strategies
- Experience designing and implementing enterprise best practices regarding existing or new technology/tooling
- Experience with senior responsibilities, including dev code reviews, change management, and building technical roadmaps/backlogs
- Exposure or experience in the following skills and techniques: Agile/PDO ceremonies; people and skills coaching; coordination and logistical planning; business-focused cascades of technical strategies and/or roadmaps
- Experience using Test Driven Development (TDD) and Behaviour Driven Development (BDD)

Qualifications

B.E./B.Tech in Computer Science with 10+ years of experience in software development with Java/J2EE and GraphQL, with knowledge of cloud platforms, preferably GCP.
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
mumbai, maharashtra, india
On-site
Job Description:

Qualifications:
- Bachelor's degree in Engineering or a related technical field, with at least 3 years (L30) / 6 years (L35) / 9 years (L40) of hands-on experience as a Data Engineer, Data Architect, or in related roles
- Experience working on Snowflake or Google Cloud Platform (GCP), especially services like BigQuery, Cloud Storage, Dataflow, Cloud Functions, and Pub/Sub
- Proficiency in Talend for complex ETL workflows and Fivetran for automated data pipeline builds, with an understanding of modern ELT patterns and real-time data streaming concepts
- Advanced SQL skills, including complex queries, stored procedures, etc.; Python, with experience in data manipulation libraries, and PySpark for large-scale data processing
- Understanding of REST APIs, building and consuming APIs for data ingestion, and knowledge of API authentication methods
- Hands-on experience with Databricks for collaborative analytics, or notebooks in similar interactive development environments
- Understanding of data governance, quality, and lineage concepts; data security and compliance requirements (GDPR, CCPA); and knowledge of data warehouse modeling techniques

Location: DGS India - Bengaluru - Manyata N1 Block
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
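The qualifications above pair Python with PySpark for large-scale processing. As a generic illustration (paths and column names are hypothetical), here is a small PySpark job covering ingestion, cleansing, aggregation, and partitioned output.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Hypothetical GCS paths; any file system Spark can read works the same way.
orders = spark.read.json("gs://my-bucket/raw/orders/*.json")

cleaned = (
    orders
    .dropDuplicates(["order_id"])                        # dedupe on the business key
    .withColumn("order_ts", F.to_timestamp("order_ts"))  # normalise the timestamp column
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount") > 0)                         # drop obviously bad rows
)

daily_revenue = cleaned.groupBy("order_date").agg(
    F.sum("amount").alias("revenue"),
    F.countDistinct("customer_id").alias("customers"),
)

# Partitioned Parquet keeps downstream scans cheap.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "gs://my-bucket/curated/daily_revenue"
)
```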
Posted 1 week ago
8.0 - 13.0 years
20 - 30 Lacs
noida
Hybrid
Job Description
- Java 8+
- Spring Boot
- Hibernate
- JPA
- RESTful APIs
- Microservices
- Kafka
- GCP Pub/Sub
- GKE
- MySQL or Postgres

Location: Noida, hybrid mode (3 days from office)

Immediate joiners preferred, as we need to close this by 15-Sep-25 at the latest. Please share CVs at ankit.kumar@celsiortech.com
Posted 1 week ago
0.0 years
0 Lacs
pune, maharashtra, india
On-site
About VOIS:

VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group's partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VOIS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.

About VOIS India:

In 2009, VOIS started operating in India and now has established global delivery centres in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations, HR Operations and more.

Job Description

About this Role

We are seeking a skilled Data Engineer to design and maintain scalable data pipelines and ETL processes using Google Cloud Platform (GCP) services. The individual will collaborate with cross-functional teams to deliver high-quality data solutions, ensuring data integrity and performance optimisation. This role is ideal for someone with a strong foundation in data engineering and a passion for continuous improvement in a dynamic environment.

What you will do
- Design, develop, and maintain scalable data pipelines and ETL processes using GCP services such as BigQuery, Cloud Data Fusion, Dataflow, Pub/Sub, Cloud Storage, Composer, Cloud Functions, and Cloud Run.
- Collaborate with data scientists, analysts, and stakeholders to understand data requirements and deliver robust solutions.
- Implement data integration solutions to ingest, process, and store structured and unstructured data from diverse sources.
- Optimise and tune data pipelines for performance, reliability, and cost-efficiency.
- Ensure data quality through validation, cleansing, and transformation processes.
- Develop and maintain data models, schemas, and metadata to support analytics and reporting.
- Monitor and troubleshoot data pipeline issues to ensure minimal disruption.
- Stay current with GCP technologies and best practices, recommending improvements.
- Mentor junior data engineers and promote a culture of collaboration and knowledge sharing.

Who you are
- Holds a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Experienced with SQL and NoSQL databases.
- Knowledgeable in data warehousing concepts and best practices.
- Familiar with data integration tools and frameworks.
- Demonstrates excellent problem-solving and analytical skills.
- Strong communicator and effective collaborator.
- Comfortable working in a fast-paced and dynamic environment.

VOIS Equal Opportunity Employer Commitment India:

VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights.
We believe that being authentically human and inclusive powers our employees' growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics.

As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 10 Best Workplaces for Millennials, Equity, and Inclusion; Top 50 Best Workplaces for Women; Top 25 Best Workplaces in IT & IT-BPM; and 10th Overall Best Workplaces in India by the Great Place to Work Institute in 2024. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills! Apply now, and we'll be in touch!
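Among the GCP services listed in this role, Cloud Functions and Pub/Sub often appear together as an event-driven ingestion step. Below is a minimal sketch of a first-generation background Cloud Function handling a Pub/Sub trigger; the message fields and routing logic are hypothetical.

```python
import base64
import json

# First-generation background Cloud Function signature for a Pub/Sub trigger.
# Topic name, payload fields, and routing are illustrative assumptions.
def handle_event(event, context):
    """Decode a Pub/Sub message and route it by type."""
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    if payload.get("type") == "usage":
        print(f"usage record: {payload.get('bytes', 0)} bytes")
    else:
        print("ignoring message type:", payload.get("type"))
    # Returning normally acks the message; raising makes Pub/Sub retry.
```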
Posted 1 week ago
6.0 - 8.0 years
0 Lacs
gurgaon, haryana, india
On-site
MLOps

We are looking for a highly skilled analytics and data engineering professional with a strong background in machine learning, MLOps, and DevOps. The ideal candidate will have experience designing and implementing scalable data and analytics pipelines, enabling production-grade ML systems, and supporting agent-based development leveraging MCP/OpenAPI-to-MCP wrappers and A2A protocols. This role combines hands-on technical work with solution design, and will require close collaboration with data scientists, product teams, and engineering stakeholders.

Key Responsibilities
- Design, build, and maintain scalable data pipelines and ETL/ELT processes for analytics and ML workloads.
- Implement MLOps frameworks to manage the model lifecycle (training, deployment, monitoring, and retraining).
- Apply DevOps best practices (CI/CD, containerization, infrastructure as code) to ML and data engineering workflows.
- Develop and optimize data models, feature stores, and ML serving architectures.
- Collaborate with AI/ML teams to integrate models into production environments.
- Support agent development using MCP/OpenAPI-to-MCP wrappers and A2A (Agent-to-Agent) communication protocols.
- Ensure data quality, governance, and compliance with security best practices.
- Troubleshoot and optimize data workflows for performance and reliability.

Required Skills & Experience
- Core: 6+ years in analytics and data engineering roles; proficiency in SQL, Python, and data pipeline orchestration tools (e.g., Airflow, Prefect); experience with distributed data processing frameworks (e.g., Spark, Databricks).
- ML/MLOps: experience deploying and maintaining ML models in production; knowledge of MLOps tools (MLflow, Kubeflow, SageMaker, Vertex AI, etc.).
- DevOps: hands-on experience with CI/CD (Jenkins, GitHub Actions, GitLab CI); proficiency with Docker, Kubernetes, and cloud-based deployment (AWS, Azure, GCP).
- Specialized: experience with MCP/OpenAPI-to-MCP wrapper integrations; experience working with A2A protocols in agent development; familiarity with agent-based architectures and multi-agent communication patterns.

Preferred Qualifications
- Master's degree in Computer Science, Data Engineering, or a related field.
- Experience in real-time analytics and streaming data pipelines (Kafka, Kinesis, Pub/Sub).
- Exposure to LLM-based systems or intelligent agents.
- Strong problem-solving skills and ability to work in cross-functional teams.
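To make the model-lifecycle responsibilities concrete, here is a hedged sketch of experiment tracking with MLflow, one of the tools the posting names; the experiment name, model, and parameters are placeholders.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("churn-model")  # hypothetical experiment name

# Synthetic data stands in for a real feature set.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)

    # Parameters, metrics, and the model artifact are all versioned per run,
    # which is what later enables monitored deployment and retraining.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, artifact_path="model")
```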
Posted 2 weeks ago
4.0 - 7.0 years
0 Lacs
india
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary

At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC

At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities:
- Design and implement scalable, efficient, and secure data pipelines on GCP, utilizing tools such as BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud Storage.
- Collaborate with cross-functional teams (data scientists, analysts, and software engineers) to understand business requirements and deliver actionable data solutions.
- Develop and maintain ETL/ELT processes to ingest, transform, and load data from various sources into GCP-based data warehouses.
- Build and manage data lakes and data marts on GCP to support analytics and business intelligence initiatives.
- Implement automated data quality checks, monitoring, and alerting systems to ensure data integrity.
- Optimize and tune performance for large-scale data processing jobs in BigQuery, Dataflow, and other GCP tools.
- Create and maintain data pipelines to collect, clean, and transform data for analytics and machine learning purposes.
- Ensure data governance and compliance with organizational policies, including data security, privacy, and access controls.
- Stay up to date with new GCP services and features, and make recommendations for improvements and new implementations.

Mandatory skill sets: GCP, BigQuery, Dataproc
Preferred skill sets: GCP, BigQuery, Dataproc, Airflow
Years of experience required: 4-7
Education qualification: B.Tech / M.Tech / MBA / MCA

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Master of Engineering, Master of Business Administration, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)

Required Skills: Good Clinical Practice (GCP)
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation + 18 more

Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship: No
Government Clearance Required: No
Job Posting End Date:
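One responsibility above is automated data quality checks. A minimal sketch of such a check using the google-cloud-bigquery client is shown below; the table and rule are hypothetical, and a real pipeline would surface the failure to its alerting system rather than just raising.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Hypothetical table and rule: fail the pipeline if any of today's rows
# are missing a customer_id.
QUERY = """
    SELECT COUNT(*) AS bad_rows
    FROM `my-project.sales.transactions`
    WHERE DATE(created_at) = CURRENT_DATE() AND customer_id IS NULL
"""

bad_rows = next(iter(client.query(QUERY).result())).bad_rows
if bad_rows:
    raise ValueError(f"data quality check failed: {bad_rows} rows missing customer_id")
print("data quality check passed")
```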
Posted 2 weeks ago
2.0 - 4.0 years
0 Lacs
pune, maharashtra, india
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary

At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC

At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities: Cloud Data Engineer (AWS/Azure/Databricks/GCP)

Experience: 2-4 years in Data Engineering

Job Description:

We are seeking skilled and dynamic Cloud Data Engineers specializing in AWS, Azure, Databricks, and GCP. The ideal candidate will have a strong background in data engineering, with a focus on data ingestion, transformation, and warehousing. They should also possess excellent knowledge of PySpark or Spark, and a proven ability to optimize performance in Spark job executions.
- Design, build, and maintain scalable data pipelines for a variety of cloud platforms, including AWS, Azure, Databricks, and GCP.
- Implement data ingestion and transformation processes to facilitate efficient data warehousing.
- Utilize cloud services to enhance data processing capabilities:
  - AWS: Glue, Athena, Lambda, Redshift, Step Functions, DynamoDB, SNS.
  - Azure: Data Factory, Synapse Analytics, Functions, Cosmos DB, Event Grid, Logic Apps, Service Bus.
  - GCP: Dataflow, BigQuery, DataProc, Cloud Functions, Bigtable, Pub/Sub, Data Fusion.
- Optimize Spark job performance to ensure high efficiency and reliability.
- Stay proactive in learning and implementing new technologies to improve data processing frameworks.
- Collaborate with cross-functional teams to deliver robust data solutions.
- Work on Spark Streaming for real-time data processing as necessary.

Qualifications:
- 2-4 years of experience in data engineering with a strong focus on cloud environments.
- Proficiency in PySpark or Spark is mandatory.
- Proven experience with data ingestion, transformation, and data warehousing.
- In-depth knowledge and hands-on experience with cloud services (AWS/Azure/GCP).
- Demonstrated ability in performance optimization of Spark jobs.
- Strong problem-solving skills and the ability to work independently as well as in a team.
- Cloud certification (AWS, Azure, or GCP) is a plus.
- Familiarity with Spark Streaming is a bonus.

Mandatory skill sets: Python, PySpark, SQL with (AWS or Azure or GCP)
Preferred skill sets: Python, PySpark, SQL with (AWS or Azure or GCP)
Years of experience required: 2-4 years
Education qualification: BE/BTech, ME/MTech, MBA, MCA

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration, Master of Engineering, Bachelor of Technology
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)

Required Skills: PySpark, Python (Programming Language), Structured Query Language (SQL)
Optional Skills: Accepting Feedback, Active Listening, Artificial Intelligence, Big Data, C++ Programming Language, Communication, Complex Data Analysis, Data-Driven Decision Making (DIDM), Data Engineering, Data Lake, Data Mining, Data Modeling, Data Pipeline, Data Quality, Data Science, Data Science Algorithms, Data Science Troubleshooting, Data Science Workflows, Deep Learning, Emotional Regulation, Empathy, Inclusion, Intellectual Curiosity, Machine Learning + 12 more

Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship: No
Government Clearance Required: No
Job Posting End Date:
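The posting repeatedly stresses Spark job performance optimization. The sketch below illustrates three common levers (broadcast joins, key-based repartitioning, and selective caching) with hypothetical paths and column names; the right settings always depend on the actual data volumes.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("spark-tuning-demo").getOrCreate()

facts = spark.read.parquet("gs://my-bucket/facts")     # large table (hypothetical path)
dims = spark.read.parquet("gs://my-bucket/dim_small")  # small lookup table

# 1. Broadcast the small side to avoid a shuffle-heavy sort-merge join.
joined = facts.join(broadcast(dims), on="dim_id")

# 2. Repartition on the aggregation key so the shuffle is balanced.
joined = joined.repartition(200, "region")

# 3. Cache only if the intermediate result is reused by several actions.
joined.cache()

summary = joined.groupBy("region").agg(F.sum("amount").alias("total"))
summary.write.mode("overwrite").parquet("gs://my-bucket/out/summary")
joined.unpersist()
```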
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Role: GCP Data Engineer
Experience: 5-9 years
Notice: 15 days or less
Interview Mode: first round virtual, second round face to face (mandatory)
Location: Bangalore

Job Description

Data ingestion, storage, processing and migration:
- Acquire, cleanse, and ingest structured and unstructured data on cloud platforms (in batch or real time) from internal and external data sources.
- Combine data from disparate sources into a single, unified, authoritative view of data (e.g., a data lake).
- Create, maintain, and provide test data to support fully automated testing.
- Enable and support data movement from one system or service to another.

Reporting:
- Design, develop, and maintain high-performance LookML models that provide comprehensive data visibility across business functions.
- Build interactive dashboards and data visualizations that tell compelling stories and drive decision making.
- Stay up to date with the latest Looker features and best practices, sharing your knowledge with the team.

Skills & Software Requirements:
- GCP data services (BigQuery, Dataflow, Data Fusion, Dataproc, Cloud Composer, Pub/Sub, Google Cloud Storage, Looker, LookML)
- Programming languages, e.g., Python, Java, SQL
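For the ingestion side of the role, a typical batch pattern is loading files from Cloud Storage into BigQuery. Here is a minimal sketch using the google-cloud-bigquery client; the bucket, table, and CSV layout are assumptions.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical bucket and table; autodetect infers the schema from the files.
job = client.load_table_from_uri(
    "gs://my-bucket/exports/customers_*.csv",
    "my-project.analytics.customers",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,  # header row
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    ),
)
job.result()  # block until the load job finishes
print(f"loaded {client.get_table('my-project.analytics.customers').num_rows} rows")
```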
Posted 2 weeks ago
7.0 - 9.0 years
0 Lacs
bengaluru, karnataka, india
Remote
At ABB, we help industries outrun - leaner and cleaner. Here, progress is an expectation - for you, your team, and the world. As a global market leader, we'll give you what you need to make it happen. It won't always be easy; growing takes grit. But at ABB, you'll never run alone. Run what runs the world.

This position reports to: Technology Product Manager

Your role and responsibilities

ABB Ability Genix APM Suite - APM Predict. As Technology Product Manager you will own the full product lifecycle for this edge-native, browser-based condition-monitoring workspace that unifies mechanical, electrical, instrumentation and process data into a single dashboard view. You will translate market needs into a winning roadmap, orchestrate cross-functional delivery in collaboration with software engineers, designers, developers, and architects, and ensure every release delights users, whether deployed on-premises, in the cloud or in air-gapped environments. In this role, you will have the opportunity to act as a systems thinker and a bridge between identifying the functional product requirements and the corresponding ways to build and competitively position the software modules, applications, subsystems, and platforms. You will also showcase your expertise by demonstrating a strong understanding and technical expertise in all aspects of software engineering.

The work model for the role is onsite. This role contributes to the ABB Ability Genix APM Suite - APM Predict product in the Process Automation Digital Division in Bangalore.

You will be mainly accountable for:
- Maintaining the product area technology roadmap and interoperability of various moving parts across the software lifecycle.
- Articulating the software technology roadmap, including goals, new software capabilities, release plans, milestones, resource mapping, and risk factors, together with status reports.
- Continuing to experiment with newer technologies in the market and finding solutions that are suitable to integrate into ABB's products.
- Identifying, prioritizing, and advocating for the largest opportunities, and creating and managing new efficiency opportunities from ideation to launch.
- Systems thinking and prioritisation: distilling sprawling requirement sets into clear release slices.
- Structured communication: creating crisp PRDs, migration guides, and release notes; driving closure of documentation gaps.
- Resilience and patience: steering large bug backlogs and late-stage regression fixes.
- Influence without authority: aligning firmware, models, UI/UX, DevOps, security and documentation teams.
- Quality mindset: enforcing test coverage, sizing calculations, upgrade paths and disaster-recovery stories.
- Stakeholder management: proven ability to defuse conflict and align demanding cross-functional stakeholders through active listening, empathy, and data-driven negotiation.

Qualifications for the role
- Experience: 7+ years in product management, solution architecture or release engineering for industrial analytics, SCADA/HMI or edge platforms.
- Domain: hands-on with condition monitoring for electrical, rotating or instrumentation assets; familiarity with OPC UA Pub/Sub, 800xA Connect, Modbus TCP.
- Deployment: proven delivery on single-node OpenShift/MicroShift with Helm and Ansible automation, mirror-registry workflows and atomic rollback.
- Data & UX: comfortable defining APIs/SQL schemas and wire-framing intuitive, web-based dashboards accessible from any browser, on-premises or remote.
- Education: B.E./B.Tech in Electrical, Electronics, Instrumentation or a related field (M.Tech or MBA preferred).

What's in it for you?
- Contribute to a flagship product. You will shape the roadmap for ABB Ability Genix APM Predict, a core offering in ABB's digital portfolio that is already improving reliability and efficiency in energy, process and hybrid industries worldwide.
- End-to-end responsibility. From customer insights and technical design to release and adoption, you will steer every stage of the product lifecycle and see the direct impact of your decisions.
- Modern, scalable tech stack. Work hands-on with single-node OpenShift, microservices, OPC UA Pub/Sub, time-series databases and advanced analytics - skills that are in high demand across the industry.
- Cross-disciplinary growth. Engage daily with data scientists, UX designers, DevOps engineers and domain specialists in electrical, rotating and instrumentation assets, broadening both your technical and industry expertise.
- Global collaboration network. Tap into ABB's worldwide engineering community, remote-monitoring centres and lifecycle service teams, giving you the resources and support to deliver ambitious features at scale.
- Visible, measurable impact. Predict's modular licensing and quick deployment model mean new capabilities reach customers in weeks, allowing you to track adoption, performance and business value almost in real time.
- Career advancement. This role sits in ABB's fast-growing Process Automation Digital business, offering suitable opportunities in the future.
- Inclusive culture and rewards. Benefit from competitive compensation, a commitment to diversity and the stability of a 135-year-old global technology leader.

More about us

ABB provides a comprehensive range of integrated automation, electrical and digital systems and services for customers in the process, hybrid and maritime industries. These offerings, coupled with deep domain knowledge in each end market, help to optimize productivity, energy efficiency, sustainability and safety of industrial processes and operations.

We value people from different backgrounds. Could this be your story? Apply today or visit www.abb.com to learn more about us and the impact of our solutions across the globe. #MyABBStory

Fraud warning: It has come to our attention that the name of ABB is being used to ask candidates to make payments for job opportunities (interviews, offers). Please be advised that ABB makes no such requests; any genuine offer from ABB will always be preceded by a formal application and interview process. All our open positions are made available on our career portal for everyone fitting the criteria to apply, and ABB does not charge any fee whatsoever for the recruitment process. Please do not make payments to any individuals or entities in connection with recruitment with ABB, even if it is claimed that the money is refundable; ABB is not liable for such transactions. For current open positions, visit our career website https://global.abb/group/en/careers and apply. Please refer to the detailed recruitment fraud caution notice at https://global.abb/group/en/careers/how-to-apply/fraud-warning
Posted 2 weeks ago
5.0 - 7.0 years
0 Lacs
powai, maharashtra, india
On-site
ABOUT GENERAL MILLS

We make food the world loves: 100 brands. In 100 countries. Across six continents. With iconic brands like Cheerios, Pillsbury, Betty Crocker, Nature Valley, and Häagen-Dazs, we've been serving up food the world loves for 155 years (and counting). Each of our brands has a unique story to tell. How we make our food is as important as the food we make. Our values are baked into our legacy and continue to accelerate us into the future as an innovative force for good. General Mills was founded in 1866 when Cadwallader Washburn boldly bought the largest flour mill west of the Mississippi. That pioneering spirit lives on today through our leadership team, who uphold a vision of relentless innovation while being a force for good.

General Mills India Center (GIC) is our global capability center in Mumbai that works as an extension of our global organization, delivering business value, service excellence and growth, while standing for good for our planet and people. With our team of 1800+ professionals, we deliver superior value across the areas of Supply Chain (SC), Digital & Technology (D&T), Innovation, Technology & Quality (ITQ), Consumer and Market Intelligence (CMI), Sales Strategy & Intelligence (SSI), Global Shared Services (GSS), Finance Shared Services (FSS) and Human Resources Shared Services (HRSS). We advocate for advancing equity and inclusion to create more equitable workplaces and a better tomorrow.

JOB OVERVIEW

Function Overview

The Digital and Technology team at General Mills stands as the largest and foremost unit, dedicated to exploring the latest trends and innovations in technology while leading the adoption of cutting-edge technologies across the organization. Collaborating closely with global business teams, the focus is on understanding business models and identifying opportunities to leverage technology for increased efficiency and disruption. The team's expertise spans a wide range of areas, including AI/ML, Data Science, IoT, NLP, Cloud, Infrastructure, RPA and Automation, Digital Transformation, Cyber Security, Blockchain, SAP S4 HANA and Enterprise Architecture. The MillsWorks initiative embodies an agile@scale delivery model, where business and technology teams operate cohesively in pods with a unified mission to deliver value for the company. Employees working on significant technology projects are recognized as Digital Transformation change agents. The team places a strong emphasis on service partnerships and employee engagement, with a commitment to advancing equity and supporting communities. In fostering an inclusive culture, the team values individuals passionate about learning and growing with technology, exemplified by the Work with Heart philosophy, which emphasizes results over facetime. Those intrigued by the prospect of contributing to the digital transformation journey of a Fortune 500 company are encouraged to explore more details about the function.

Purpose of the role

The Enterprise Data Development team is responsible for designing and architecting solutions to integrate and transform business data into a data warehouse, delivering a data layer for the Enterprise using cutting-edge cloud technologies like GCP. We design solutions to meet the expanding need for more and more internal and external information to be integrated with existing sources, and we research, implement, and leverage new technologies to deliver more actionable insights to the enterprise.
We integrate solutions that combine process, technology landscapes, and business information from the core enterprise data sources that form our corporate information factory to provide end-to-end solutions for the business.

KEY ACCOUNTABILITIES
- Design, create, code, and support a variety of data pipelines and models on any cloud technology (GCP preferred).
- Partner with business analysts, architects, and other key project stakeholders to deliver business initiatives.
- Seek to learn new skills, mentor newer team members, build domain expertise, and document processes.
- Actively build knowledge of D&T resources, people, and technology.
- Participate in the evaluation, implementation, and deployment of emerging tools and processes in the big data space.
- Collaboratively troubleshoot technical and performance issues in the data space.
- Lean into ambiguity and partner with others to find solutions.
- Identify opportunities to contribute work to the broader GMI data community.
- Manage multiple stakeholders and tasks, and navigate through ambiguity and complexity.
- Lead small projects/initiatives and contribute to or lead the implementation of enterprise projects.
- Support existing data warehouses and related jobs.
- Maintain familiarity with real-time and streaming data processes.
- Proactively research up-to-date technologies and techniques for development.
- Bring an automation mindset, embracing a continuous improvement mentality to streamline and eliminate waste in all processes.

MINIMUM QUALIFICATIONS
- Identified as the technical/project lead for global projects, with 5+ years of total experience in the ETL/data space, including 2+ years of relevant experience in the cloud space.
- Actively coaches and mentors a team of developers while proactively identifying potential issues, deadline slippage, and opportunities in projects/tasks, and takes timely decisions.
- Demonstrates strong attention to detail and delivery accuracy, with strong analytical skills and communication (verbal and written).
- Collaborates with business stakeholders and develops strong working relationships.
- Self-motivated team player with the ability to overcome challenges and achieve desired results.
- Expert-level experience in cloud (storage, modeling, real time), data storage (S3/Blob Storage), BigQuery, SQL, Composer, cloud functions (Lambda/Azure Functions), and data warehousing.
- Intermediate-level experience with Python, Kafka, Pub/Sub, and dbt.
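The qualifications above include Kafka and Pub/Sub experience for the streaming side of the data layer. As a generic illustration (broker, topic, and message fields are hypothetical), here is a small consumer using the kafka-python package.

```python
import json

from kafka import KafkaConsumer  # kafka-python package

# Hypothetical topic and brokers for illustration.
consumer = KafkaConsumer(
    "inventory-updates",
    bootstrap_servers=["broker1:9092"],
    group_id="warehouse-loader",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    record = message.value
    # In a real pipeline this row would be staged to cloud storage or the warehouse.
    print(f"partition={message.partition} offset={message.offset} sku={record.get('sku')}")
```

The consumer group id is what lets several instances of this process share partitions, which is the usual way streaming ingestion is scaled out.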
Posted 2 weeks ago
4.0 - 6.0 years
0 Lacs
pune, maharashtra, india
On-site
About VOIS: VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group's partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VOIS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.
About VOIS India: In 2009, VOIS started operating in India and now has established global delivery centres in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations, HR Operations and more.
Job Description
Job Summary: We are seeking a highly skilled and experienced Senior GCP Data Engineer to join our dynamic team. The ideal candidate will have 4 to 6 years of experience in data engineering, with a strong focus on Google Cloud Platform (GCP). This role involves designing, developing, and maintaining scalable data pipelines and systems to support our data-driven initiatives.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes using GCP services such as BigQuery, Cloud Data Fusion, Dataflow, Pub/Sub, Cloud Storage, Composer, Cloud Functions and Cloud Run
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions
- Implement data integration solutions to ingest, process, and store large volumes of structured and unstructured data from various sources
- Optimize and tune data pipelines for performance, reliability, and cost-efficiency
- Ensure data quality and integrity through data validation, cleansing, and transformation processes
- Develop and maintain data models, schemas, and metadata to support data analytics and reporting
- Monitor and troubleshoot data pipeline issues, ensuring timely resolution and minimal disruption to data workflows
- Stay up-to-date with the latest GCP technologies and best practices, and provide recommendations for continuous improvement
- Mentor and guide junior data engineers, fostering a culture of knowledge sharing and collaboration
Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- 3+ years of experience in data engineering, with a strong focus on GCP
- Proficiency in GCP services such as BigQuery, Cloud Data Fusion, Dataflow, Pub/Sub, Cloud Storage, Composer, Cloud Functions and Cloud Run
- Strong programming skills in Python and PL/SQL
- Experience with SQL and NoSQL databases
- Knowledge of data warehousing concepts and best practices
- Familiarity with data integration tools and frameworks
- Excellent problem-solving and analytical skills
- Strong communication and collaboration skills
- Ability to work in a fast-paced, dynamic environment
Preferred Qualifications:
- GCP certification (e.g., Professional Data Engineer)
- Experience with machine learning and data science workflows
- Knowledge of DevOps practices and tools for CI/CD
Benefits:
- Competitive salary and benefits package
- Opportunity to work with cutting-edge technologies and innovative projects
- Collaborative and inclusive work environment
- Professional development and growth opportunities
VOIS Equal Opportunity Employer Commitment India: VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees' growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics. As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 10 Best Workplaces for Millennials, Equity, and Inclusion, Top 50 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM and 10th Overall Best Workplaces in India by the Great Place to Work Institute in 2024. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills! Apply now, and we'll be in touch!
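As an illustration of the batch-ingestion work this role describes (Cloud Storage into BigQuery), here is a hedged Python sketch using the google-cloud-bigquery client. The project, bucket, file and table names are all hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,   # skip the CSV header row
    autodetect=True,       # infer the schema from the file
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/landing/customers.csv",  # hypothetical source object
    "my-project.staging.customers",          # hypothetical destination table
    job_config=job_config,
)
load_job.result()  # blocks until the load finishes; raises on failure

table = client.get_table("my-project.staging.customers")
print(f"table now has {table.num_rows} rows")
```

In production this kind of load would typically be wrapped in a Composer task or triggered by a Cloud Function on object arrival, with explicit schemas rather than autodetect.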
Posted 2 weeks ago
0.0 years
0 Lacs
pune, maharashtra, india
On-site
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.
Your Role
- Develop and implement Generative AI / AI solutions on Google Cloud Platform
- Work with cross-functional teams to design and deliver AI-powered products and services
- Develop, version and execute Python code
- Deploy models as endpoints in a Dev environment
- Solid understanding of Python
- Deep learning frameworks such as TensorFlow, PyTorch, or JAX
- Natural language processing (NLP) and machine learning (ML)
- Cloud Storage, Compute Engine, Vertex AI, Cloud Functions, Pub/Sub, etc.
- Generative AI support in Vertex AI, specifically hands-on experience with Generative AI models like Gemini and Vertex AI Search
Your Profile
- Experience in Generative AI development with Google Cloud Platform
- Experience in delivering an AI solution on the Vertex AI platform
- Experience in developing and deploying AI solutions with ML
What you'll love about working here
You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage or new parent support via flexible work. You will have access to one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications.
About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.
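For a sense of the hands-on Vertex AI work this role names, here is a hedged sketch of calling a Gemini model through the Vertex AI Python SDK (google-cloud-aiplatform). The project, region and model version are assumptions; check the current SDK documentation for supported models.

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Hypothetical project and region; vertexai.init wires the SDK to your GCP project
vertexai.init(project="my-project", location="us-central1")

model = GenerativeModel("gemini-1.5-flash")  # model name may differ by SDK release
response = model.generate_content(
    "Summarize the key risks in this contract clause: ..."
)
print(response.text)
```

Deploying such a model behind an endpoint for a Dev environment would typically add a Cloud Run or Vertex AI endpoint layer on top of this call.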
Posted 3 weeks ago
5.0 - 7.0 years
0 Lacs
pune, maharashtra, india
On-site
Job description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.
HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.
We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.
In this role, you will:
- Design and Develop ETL Processes: Lead the design and implementation of ETL processes using all kinds of batch/streaming tools to extract, transform, and load data from various sources into GCP. Collaborate with stakeholders to gather requirements and ensure that ETL solutions meet business needs.
- Data Pipeline Optimization: Optimize data pipelines for performance, scalability, and reliability, ensuring efficient data processing workflows. Monitor and troubleshoot ETL processes, proactively addressing issues and bottlenecks.
- Data Integration and Management: Integrate data from diverse sources, including databases, APIs, and flat files, ensuring data quality and consistency. Manage and maintain data storage solutions in GCP (e.g., BigQuery, Cloud Storage) to support analytics and reporting.
- GCP Dataflow Development: Write Apache Beam based Dataflow jobs for data extraction, transformation, and analysis, ensuring optimal performance and accuracy. Collaborate with data analysts and data scientists to prepare data for analysis and reporting.
- Automation and Monitoring: Implement automation for ETL workflows using tools like Apache Airflow or Cloud Composer, enhancing efficiency and reducing manual intervention. Set up monitoring and alerting mechanisms to ensure the health of data pipelines and compliance with SLAs.
- Data Governance and Security: Apply best practices for data governance, ensuring compliance with industry regulations (e.g., GDPR, HIPAA) and internal policies. Collaborate with security teams to implement data protection measures and address vulnerabilities.
- Documentation and Knowledge Sharing: Document ETL processes, data models, and architecture to facilitate knowledge sharing and onboarding of new team members. Conduct training sessions and workshops to share expertise and promote best practices within the team.
Requirements
To be successful in this role, you should meet the following requirements:
- Experience: Minimum of 5 years of industry experience in data engineering or ETL development, with a strong focus on DataStage and GCP. Proven experience in designing and managing ETL solutions, including data modeling, data warehousing, and SQL development.
- Technical Skills: Strong knowledge of GCP services (e.g., BigQuery, Dataflow, Cloud Storage, Pub/Sub) and their application in data engineering. Experience with cloud-based solutions, especially on GCP; cloud-certified candidates are preferred. Experience and knowledge of big data processing in batch and streaming modes, and proficiency in big data ecosystems, e.g. Hadoop, HBase, Hive, MapReduce, Kafka, Flink, Spark, etc. Familiarity with Java & Python for data manipulation on cloud/big data platforms.
- Analytical Skills: Strong problem-solving skills with a keen attention to detail. Ability to analyze complex data sets and derive meaningful insights.
You'll achieve more when you join HSBC. www.hsbc.com/careers
HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.
Issued by - HSBC Software Development India
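To make the "Apache Beam based Dataflow job" responsibility concrete, here is a minimal hedged sketch of a Beam pipeline in Python: read files from Cloud Storage, drop malformed rows, write results back. The paths are hypothetical, and DirectRunner is used for local testing; on GCP you would switch to DataflowRunner with project/region/staging options.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(runner="DirectRunner")  # local test runner

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read"      >> beam.io.ReadFromText("gs://my-bucket/input/*.csv")
        | "ParseRow"  >> beam.Map(lambda line: line.split(","))
        | "FilterBad" >> beam.Filter(lambda row: len(row) == 3)  # drop malformed rows
        | "Format"    >> beam.Map(lambda row: ",".join(row))
        | "Write"     >> beam.io.WriteToText("gs://my-bucket/output/clean")
    )
```

The same pipeline shape handles the streaming case by swapping the file I/O for Pub/Sub reads and BigQuery writes.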
Posted 3 weeks ago
7.0 - 9.0 years
0 Lacs
pune, maharashtra, india
On-site
Job description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.
HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.
We are currently seeking an experienced professional to join our team in the role of Senior Consultant Specialist.
In this role, you will:
- Design, develop, and deploy Java-based data pipelines using GCP technologies like Dataflow, Pub/Sub, BigQuery and Bigtable
- Collaborate with cross-functional pods to gather requirements and implement data solutions
- Ensure the performance, scalability, and reliability of data pipelines handling heavy loads
- Apply industry development tools, principles, and best practices to ensure high-quality code
- Debug and troubleshoot issues in data pipelines and provide timely resolutions
- Support testing efforts and ensure the accuracy and integrity of data in the pipelines
- Stay up-to-date with the latest trends and technologies in data engineering and cloud platforms
- Document processes, procedures, and code for future reference
Requirements
To be successful in this role, you should meet the following requirements:
- Bachelor's degree in Computer Science or related disciplines
- 7 or more years of hands-on development experience as a Java Data Engineer or similar role, with a focus on GCP, Dataflow, Pub/Sub, BigQuery, and Bigtable
- Strong knowledge of Java programming and experience working on data pipelines with heavy loads
- Ability to work with geographically distributed and cross-functional teams
- Working knowledge of industry development tools and principles
- Familiarity with banking or relevant industry processes and regulations is a plus
- Knowledge of developing applications hosted on cloud platforms like GCP or any relevant cloud platform
- Excellent problem-solving and communication skills
- Ability to work independently and in a team, with a flexible and adaptable mindset
- Strong attention to detail and ability to prioritize tasks effectively
You'll achieve more when you join HSBC. www.hsbc.com/careers
HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.
Issued by - HSBC Software Development India
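The role above is Java-centric, but as a language-neutral illustration of the Pub/Sub publish/pull flow it mentions, here is a hedged Python sketch (the Java client exposes the same concepts). The project, topic and subscription names are hypothetical.

```python
from google.cloud import pubsub_v1

project_id = "my-project"  # hypothetical project

# Publish one message with an attribute
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, "txn-events")  # hypothetical topic
future = publisher.publish(topic_path, data=b'{"txn_id": 42}', source="demo")
print("published message id:", future.result())

# Synchronously pull and acknowledge a small batch
subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(project_id, "txn-events-sub")
response = subscriber.pull(subscription=sub_path, max_messages=10, timeout=10.0)
for received in response.received_messages:
    print("got:", received.message.data)
if response.received_messages:
    subscriber.acknowledge(
        subscription=sub_path,
        ack_ids=[m.ack_id for m in response.received_messages],
    )
```

High-throughput pipelines of the kind this posting describes would normally consume the subscription from a streaming Dataflow job rather than a synchronous pull loop.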
Posted 3 weeks ago
4.0 - 6.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Dear Candidates,
HCL is hiring for a Data Engineer who is an expert in GCP & dbt. Interested candidates, kindly share your updated resume to [HIDDEN TEXT]
Responsibilities:
- Take end-to-end responsibility to build, optimize and support existing and new data products towards the defined target vision
- Be a champion of the DevOps mindset and principles, able to manage CI/CD pipelines and Terraform as well as cloud infrastructure, which in our context is GCP (Google Cloud Platform)
- Ensure that our data products work as independent units of deployment and that their non-functional aspects follow the defined standards for security, scalability, observability, and performance
- Work closely with the Product Owner and other stakeholders on the vision for existing data products and on identifying new data products to support our customer needs
- Work with product teams within and outside our domain on topics that relate to the data mesh concept
- Evaluate and drive continuous improvement and reduce technical debt in the teams
- Maintain expertise in the latest data/analytics and cloud technologies
Qualifications:
- Passion for data, people, and technology
- At least 4+ years of work experience, including hands-on experience as either a data engineer on modern cloud data platforms / advanced analytics environments, or a software engineer with cloud technologies and infrastructure
- Experience with different data formats (Avro, Parquet)
- Experience with data query languages (SQL or similar)
- Experience in data-centric programming using one or more programming languages: Python, Java or Scala
- Good understanding of different data modelling techniques and their trade-offs
- Knowledge of NoSQL and RDBMS databases
- A collaborative and co-creative mindset with excellent communication skills
- Motivated to work in an environment that allows you to work and take decisions independently
- Experience in working with data visualization tools
- Fluent in English, both written and verbal
Tech stack: GCP services (BigQuery, Cloud Run, Cloud Functions, Pub/Sub, Cloud Composer etc.), SQL, dbt, Python, Terraform, (Power BI)
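As a small illustration of the BigQuery + Python portion of this stack, here is a hedged sketch of running a parameterized query with the google-cloud-bigquery client; in practice a dbt model would express the same transformation as versioned SQL. The project, table and column names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

sql = """
    SELECT product_id, SUM(quantity) AS units
    FROM `my-project.sales.orders`
    WHERE order_date >= @start_date
    GROUP BY product_id
    ORDER BY units DESC
    LIMIT 10
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        # Named parameter keeps the query safe from string interpolation bugs
        bigquery.ScalarQueryParameter("start_date", "DATE", "2024-01-01"),
    ]
)
for row in client.query(sql, job_config=job_config):
    print(row.product_id, row.units)
```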
Posted 3 weeks ago
6.0 - 8.0 years
0 Lacs
hyderabad, telangana, india
On-site
Summary
We are seeking a highly skilled and motivated GCP Data Engineering Manager to join our dynamic team. As a Data Engineering Manager specializing in Google Cloud Platform (GCP), you will play a crucial role in designing, implementing, and maintaining scalable data pipelines and systems. You will leverage your expertise in Google BigQuery, SQL, Python, and analytical skills to drive data-driven decision-making processes and support various business functions.
About The Role
Key Responsibilities:
- Data Pipeline Development: Design, develop, and maintain robust data pipelines using GCP services like Dataflow and Dataproc, ensuring high performance and scalability
- Google BigQuery Expertise: Utilize your hands-on experience with Google BigQuery to manage and optimize data storage, retrieval, and processing
- SQL Proficiency: Write and optimize complex SQL queries to transform and analyze large datasets, ensuring data accuracy and integrity
- Python Programming: Develop and maintain Python scripts for data processing, automation, and integration with other systems and tools
- Data Integration: Collaborate with data analysts and other stakeholders to integrate data from various sources, ensuring seamless data flow and consistency
- Data Quality and Governance: Implement data quality checks, validation processes, and governance frameworks to maintain high data standards
- Performance Tuning: Monitor and optimize the performance of data pipelines, queries, and storage solutions to ensure efficient data processing
- Documentation: Create comprehensive documentation for data pipelines, processes, and best practices to facilitate knowledge sharing and team collaboration
Minimum Qualifications
- Proven experience (minimum 6-8 years) as a Data Engineer, with significant hands-on experience in Google Cloud Platform (GCP) and Google BigQuery
- Proficiency in SQL for data transformation, analysis and performance optimization
- Strong programming skills in Python, with experience in developing data processing scripts and automation
- Proven analytical skills with the ability to interpret complex data and provide actionable insights
- Excellent problem-solving abilities and attention to detail
- Strong communication and collaboration skills, with the ability to work effectively in a team environment
Desired Skills
- Experience with Google Analytics data and understanding of digital marketing data
- Familiarity with other GCP services such as Cloud Storage, Dataflow, Pub/Sub, and Dataproc
- Knowledge of data visualization tools such as Looker, Tableau, or Data Studio
- Experience with machine learning frameworks and libraries
Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients' lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture
Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network
Benefits and Rewards: Read our handbook to learn about all the ways we'll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards
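Since the role emphasizes data-quality checks and Python scripting around BigQuery, here is a hedged sketch of a simple validation gate: pull a query result into pandas and assert basic expectations before publishing downstream. The table and rules are hypothetical, and to_dataframe() assumes pandas and db-dtypes are installed alongside the BigQuery client.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

df = client.query(
    "SELECT campaign_id, clicks, spend FROM `my-project.mart.campaign_daily`"
).to_dataframe()

# Fail loudly rather than let bad data flow downstream
assert len(df) > 0, "empty extract - upstream load may have failed"
assert df["campaign_id"].notna().all(), "NULL campaign_id found"
assert (df["clicks"] >= 0).all(), "negative click counts found"
print(f"{len(df)} rows passed validation")
```

In a governed pipeline these assertions would usually live in a framework such as Great Expectations or dbt tests rather than ad hoc asserts, but the shape of the check is the same.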
Posted 4 weeks ago
5.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We are seeking a highly skilled and motivated Lead DS/ML Engineer to join our team. The role is critical to the development of a cutting-edge reporting platform designed to measure and optimize online marketing campaigns. We are seeking a Data Scientist / ML Engineer with a strong foundation in data engineering (ELT, data pipelines) and advanced machine learning to develop and deploy sophisticated models. The role focuses on building scalable data pipelines, developing ML models, and deploying solutions in production to support a cutting-edge reporting, insights, and recommendations platform for measuring and optimizing online marketing campaigns. The ideal candidate should be comfortable working across data engineering, the ML model lifecycle, and cloud-native technologies.
Job Description:
Key Responsibilities:
Data Engineering & Pipeline Development
- Design, build, and maintain scalable ELT pipelines for ingesting, transforming, and processing large-scale marketing campaign data
- Ensure high data quality, integrity, and governance using orchestration tools like Apache Airflow, Google Cloud Composer, or Prefect
- Optimize data storage, retrieval, and processing using BigQuery, Dataflow, and Spark for both batch and real-time workloads
- Implement data modeling and feature engineering for ML use cases
Machine Learning Model Development & Validation
- Develop and validate predictive and prescriptive ML models to enhance marketing campaign measurement and optimization
- Experiment with different algorithms (regression, classification, clustering, reinforcement learning) to drive insights and recommendations
- Leverage NLP, time-series forecasting, and causal inference models to improve campaign attribution and performance analysis
- Optimize models for scalability, efficiency, and interpretability
MLOps & Model Deployment
- Deploy and monitor ML models in production using tools such as Vertex AI, MLflow, Kubeflow, or TensorFlow Serving
- Implement CI/CD pipelines for ML models, ensuring seamless updates and retraining
- Develop real-time inference solutions and integrate ML models into BI dashboards and reporting platforms
Cloud & Infrastructure Optimization
- Design cloud-native data processing solutions on Google Cloud Platform (GCP), leveraging services such as BigQuery, Cloud Storage, Cloud Functions, Pub/Sub, and Dataflow
- Work on containerized deployment (Docker, Kubernetes) for scalable model inference
- Implement cost-efficient, serverless data solutions where applicable
Business Impact & Cross-functional Collaboration
- Work closely with data analysts, marketing teams, and software engineers to align ML and data solutions with business objectives
- Translate complex model insights into actionable business recommendations
- Present findings and performance metrics to both technical and non-technical stakeholders
Qualifications & Skills:
Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Artificial Intelligence, Statistics, or a related field
- Certifications in Google Cloud (Professional Data Engineer, ML Engineer) are a plus
Must-Have Skills:
- Experience: 5-10 years with the mentioned skillset and relevant hands-on experience
- Data Engineering: Experience with ETL/ELT pipelines, data ingestion, transformation, and orchestration (Airflow, Dataflow, Composer)
- ML Model Development: Strong grasp of statistical modeling, supervised/unsupervised learning, time-series forecasting, and NLP
- Programming: Proficiency in Python (Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch) and SQL for large-scale data processing
- Cloud & Infrastructure: Expertise in GCP (BigQuery, Vertex AI, Dataflow, Pub/Sub, Cloud Storage) or equivalent cloud platforms
- MLOps & Deployment: Hands-on experience with CI/CD pipelines, model monitoring, and version control (MLflow, Kubeflow, Vertex AI, or similar tools)
- Data Warehousing & Real-time Processing: Strong knowledge of modern data platforms for batch and streaming data processing
Nice-to-Have Skills:
- Experience with Graph ML, reinforcement learning, or causal inference modeling
- Working knowledge of BI tools (Looker, Tableau, Power BI) for integrating ML insights into dashboards
- Familiarity with marketing analytics, attribution modeling, and A/B testing methodologies
- Experience with distributed computing frameworks (Spark, Dask, Ray)
Location: Bengaluru
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
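To illustrate the train-and-track loop this role spans (model development plus MLOps), here is a hedged sketch that fits a scikit-learn baseline on synthetic stand-in data and logs parameters and metrics to MLflow. The feature set, labels and run name are all hypothetical.

```python
import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for campaign features and a conversion label
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

with mlflow.start_run(run_name="campaign-response-baseline"):  # hypothetical run name
    model = LogisticRegression(C=1.0)
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    mlflow.log_param("C", 1.0)
    mlflow.log_metric("val_auc", auc)
    mlflow.sklearn.log_model(model, "model")  # store the fitted model as a run artifact
    print("validation AUC:", auc)
```

The logged artifact is what a CI/CD pipeline would then promote to a serving layer such as Vertex AI or TensorFlow Serving.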
Posted 1 month ago