24278 ETL Jobs - Page 24

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Details:
Position Title: Power BI Developer
Experience Required: 5+ years
Location: Marathahalli, Bangalore
Employer: Global Product Company - Established 1969

Why Join Us?
Be part of a global product company with over 50 years of innovation. Work in a collaborative and growth-oriented environment. Help shape the future of digital products in a rapidly evolving industry.

Required Skills & Abilities:
Strong command of Power BI: building visually appealing dashboards and reports
Proficient in SQL, data modeling, and ETL processes
Solid understanding of DevOps, CI/CD pipelines, and version control tools (e.g., Git)
Experience with Jira or other issue tracking tools
Expertise in testing frameworks and debugging techniques
Strong analytical and problem-solving skills
Ability to read and work with code written by others
Clear documentation and communication abilities
Experience supporting deployments and providing end-user training

Posted 4 days ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About Us: Paytm is India's leading mobile payments and financial services distribution company. A pioneer of the mobile QR payments revolution in India, Paytm builds technologies that help small businesses with payments and commerce. Paytm's mission is to serve half a billion Indians and bring them into the mainstream economy with the help of technology.

Job Summary:
Build systems for the collection and transformation of complex data sets for use in production systems
Collaborate with engineers on building and maintaining back-end services
Implement data schema and data management improvements for scale and performance
Provide insights into key performance indicators for the product and customer usage
Serve as the team's authority on data infrastructure, privacy controls, and data security
Collaborate with appropriate stakeholders to understand user requirements
Support efforts for continuous improvement, metrics, and test automation
Maintain operations of the live service as issues arise, on a rotational, on-call basis
Verify that the data architecture meets security and compliance requirements and expectations
Learn fast and adapt quickly at a rapid pace

Minimum Qualifications:
Bachelor's degree in computer science, computer engineering, or a related field, or equivalent experience
3+ years of progressive experience demonstrating strong architecture, programming, and engineering skills
Firm grasp of data structures and algorithms, with fluency in programming languages such as Java, Python, and Scala
Strong SQL skills, including the ability to write complex queries
Strong experience with orchestration tools such as Airflow
Demonstrated ability to lead, partner, and collaborate cross-functionally across many engineering organizations
Experience with streaming technologies such as Apache Spark, Kafka, and Flink
Backend experience including Apache Cassandra, MongoDB, and relational databases such as Oracle and PostgreSQL
Solid hands-on AWS/GCP experience (4+ years)
Strong communication and soft skills
Knowledge of and/or experience with containerized environments, Kubernetes, and Docker
Experience implementing and maintaining highly scalable microservices with REST, Spring Boot, and gRPC
Appetite for trying new things and building rapid POCs

Key Responsibilities:
Design, develop, and maintain scalable data pipelines to support data ingestion, processing, and storage
Implement data integration solutions to consolidate data from multiple sources into a centralized data warehouse or data lake
Collaborate with data scientists and analysts to understand data requirements and translate them into technical specifications
Ensure data quality and integrity by implementing robust data validation and cleansing processes
Optimize data pipelines for performance, scalability, and reliability
Develop and maintain ETL (Extract, Transform, Load) processes using tools such as Apache Spark, Apache NiFi, or similar technologies
Monitor and troubleshoot data pipeline issues, ensuring timely resolution and minimal downtime
Implement best practices for data management, security, and compliance
Document data engineering processes, workflows, and technical specifications
Stay up to date with industry trends and emerging technologies in data engineering and big data
Compensation: If you are the right fit, we believe in creating wealth for you. With 500 mn+ registered users, 25 mn+ merchants, and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers and merchants, and we are committed to it. India's largest digital lending story is brewing here. It's your opportunity to be a part of the story!
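As an illustration of the Spark-based pipeline work this posting describes, here is a minimal PySpark batch ETL sketch: it reads raw events, applies a simple validation and cleansing step, and appends the result to a warehouse-style table. The paths, column names, and table name are hypothetical placeholders, not details from the posting.

```python
# Minimal PySpark ETL sketch: ingest raw events, validate, and persist.
# Paths, columns, and table names below are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

# Extract: read raw JSON events (hypothetical path).
raw = spark.read.json("s3a://raw-bucket/events/2025-08-01/")

# Transform: drop malformed rows, normalize the timestamp, dedupe.
clean = (
    raw.filter(F.col("user_id").isNotNull())            # basic validation
       .withColumn("event_ts", F.to_timestamp("event_time"))
       .withColumn("event_date", F.to_date("event_ts"))  # partition key
       .dropDuplicates(["event_id"])                     # idempotent re-runs
)

# Load: append into a partitioned warehouse table (hypothetical name;
# saveAsTable assumes a metastore is configured).
(clean.write
      .mode("append")
      .partitionBy("event_date")
      .saveAsTable("analytics.fact_events"))
```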

Posted 4 days ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

Work Level: Individual
Core: Responsible
Leadership: Team Alignment
Industry Type: Information Technology
Function: Database Administrator
Key Skills: mSQL, SQL Writing, PL/SQL
Education: Graduate
Note: This is a requirement for one of Workassist's hiring partners. This is a remote position.

Responsibilities:
Write, optimize, and maintain SQL queries, stored procedures, and functions.
Assist in designing and managing relational databases.
Perform data extraction, transformation, and loading (ETL) tasks.
Ensure database integrity, security, and performance.
Work with developers to integrate databases into applications.
Support data analysis and reporting by writing complex queries.
Document database structures, processes, and best practices.

Requirements:
Currently pursuing or recently completed a degree in Computer Science, Information Technology, or a related field.
Strong understanding of SQL and relational database concepts.
Experience with databases such as MySQL, PostgreSQL, SQL Server, or Oracle.
Ability to write efficient and optimized SQL queries.
Basic knowledge of indexing, stored procedures, and triggers.
Understanding of database normalization and design principles.
Good analytical and problem-solving skills.
Ability to work independently and in a team in a remote setting.

Preferred Skills (Nice to Have):
Experience with ETL processes and data warehousing.
Knowledge of cloud-based databases (AWS RDS, Google BigQuery, Azure SQL).
Familiarity with database performance tuning and indexing strategies.
Exposure to Python or other scripting languages for database automation.
Experience with business intelligence (BI) tools like Power BI or Tableau.

Company Description:
Workassist is an online recruitment and employment solution platform based in Lucknow, India. We provide relevant profiles to employers and connect job seekers with the best opportunities across various industries. With a network of over 10,000 recruiters, we help employers recruit talented individuals from sectors such as Banking & Finance, Consulting, Sales & Marketing, HR, IT, Operations, and Legal. We have adapted to the new normal and strive to provide a seamless job search experience for job seekers worldwide. Our goal is to enhance the job-seeking experience by leveraging technology and matching job seekers with the right employers. For a seamless job search experience, visit our website: https://bit.ly/3QBfBU2 (Note: There are many more opportunities apart from this on the portal. Depending on your skills, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
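By way of example, the core loop of the SQL work this role describes (load data, index it, then answer a reporting query) can be sketched in a few lines. The sketch below uses Python's built-in sqlite3 module with invented table and column names, purely for illustration.

```python
# A small, self-contained load-index-query sketch using stdlib sqlite3.
# Table and column names are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Load: create a table and insert a few rows.
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")
cur.executemany(
    "INSERT INTO orders (customer, amount) VALUES (?, ?)",
    [("acme", 120.0), ("acme", 80.0), ("globex", 42.5)],
)
# Index the column we filter/group on, so lookups avoid a full scan.
cur.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

# Query: an aggregate report of the kind the responsibilities mention.
cur.execute(
    "SELECT customer, SUM(amount) AS total FROM orders "
    "GROUP BY customer ORDER BY total DESC"
)
for customer, total in cur.fetchall():
    print(customer, total)

conn.close()
```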

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Summary
We are looking for a proactive and detail-oriented BI Administrator with 3 to 7 years of experience supporting and managing enterprise BI platforms, including SAP BusinessObjects, Tableau, and Microsoft Power BI. The ideal candidate will ensure platform stability, provide user support, and collaborate with cross-functional teams to enable data-driven decision making.

Key Responsibilities
· Administer and support SAP BusinessObjects, Tableau Server, and Microsoft Power BI Service environments.
· Monitor platform performance, availability, and usage; troubleshoot and resolve technical issues.
· Manage user access, roles, and security configurations across BI tools.
· Assist report developers and business users with publishing, scheduling, and optimizing dashboards and reports.
· Coordinate with infrastructure, data engineering, and security teams to ensure platform reliability and compliance.
· Perform upgrades, patches, and backups for BI platforms.
· Document support procedures, configurations, and best practices.
· Provide training and technical support to end users and stakeholders.

Required Skills & Qualifications
· Bachelor's degree in computer science, information systems, or a related field.
· 3 to 7 years of experience in BI platform support and administration.
· Hands-on experience with SAP BusinessObjects, Tableau Server, and Power BI Service.
· Familiarity with authentication protocols (e.g., Active Directory, SAML, OAuth).
· Good SQL skills and understanding of data modeling and ETL processes.
· Experience with scripting (PowerShell, VBScript, and Python) for automation and monitoring.
· Excellent communication and problem-solving skills.

Preferred Qualifications
· Exposure to ITIL practices and tools like ServiceNow or Jira.
· Certifications in SAP BusinessObjects, Tableau, or Power BI.
· Experience with cloud platforms (Azure, AWS, or GCP).
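The "scripting for automation and monitoring" requirement might look something like the following hedged sketch: a Python script that polls a BI server's health endpoint and logs failures. The URL and timeout are hypothetical; real platforms such as Tableau Server and Power BI Service each expose their own admin APIs.

```python
# Hypothetical availability check for a BI platform, of the kind a BI
# administrator might schedule. URL and timeout values are placeholders.
import logging
import requests

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

HEALTH_URL = "https://bi.example.com/health"  # placeholder endpoint

def check_platform() -> bool:
    """Return True if the platform responds with HTTP 200 within 5 seconds."""
    try:
        resp = requests.get(HEALTH_URL, timeout=5)
        if resp.status_code == 200:
            logging.info("BI platform healthy")
            return True
        logging.warning("BI platform returned HTTP %s", resp.status_code)
    except requests.RequestException as exc:
        logging.error("BI platform unreachable: %s", exc)
    return False

if __name__ == "__main__":
    check_platform()
```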

Posted 4 days ago

Apply

0 years

0 Lacs

India

Remote

Role: Support Specialist L3
Location: India

About the Operations Team: The team covers the activities, processes, and practices involved in managing and maintaining the operational aspects of an organization's IT infrastructure and systems. It focuses on ensuring the smooth and reliable operation of IT services, infrastructure components, and supporting systems in the Data & Analytics area.

Duties Description:
Provide expert service support as the L3 specialist for the service.
Identify, analyze, and develop solutions for complex incidents or problems raised by stakeholders and clients as needed.
Analyze issues and develop tools and/or solutions that help enable business continuity and mitigate business impact.
Proactively and promptly update assigned tasks; provide responses and solutions within the team's agreed timelines.
Propose corrective action plans for problems.
Deploy bug fixes in managed applications.
Gather requirements, then analyze, design, and implement complex visualization solutions.
Participate in internal knowledge sharing, collaboration activities, and service improvement initiatives.
Tasks may include monitoring, incident/problem resolution, documentation, automation, assessment, and implementation/deployment of change requests.
Provide technical feedback and mentoring to teammates.

Requirements:
Willing to work either ASIA, EMEA, or NALA shift.
Strong problem-solving, analytical, and critical thinking skills.
Strong communication skills: the ability to translate technical details for business/non-technical stakeholders.
Extensive experience with SQL, T-SQL, and PL/SQL, including but not limited to ETL, merge, partition exchange, exception and error handling, and performance tuning.
Experience with Python/PySpark, mainly Pandas, NumPy, Pathlib, and PySpark SQL functions.
Experience with Azure fundamentals, particularly Azure Blob Storage (file systems and AzCopy).
Experience with Azure data services: Databricks and Data Factory.
Understands the operation of ETL processes, triggers, and schedulers; logging, dbutils, PySpark SQL functions, and handling different file types, e.g., JSON.
Experience with Git repository maintenance and DevOps concepts; familiarity with build, test, and deployment processes.

Nice to have:
Experience with Control-M (if no experience, required to learn on the job).
KNIME.
Power BI.
Willingness to be cross-trained on all of the technologies involved in the solution.

We offer:
Stable employment. On the market since 2008, with 1300+ talents currently on board in 7 global sites.
"Office as an option" model. You can choose to work remotely or in the office.
Flexibility regarding working hours and your preferred form of contract.
Comprehensive online onboarding program with a "Buddy" from day 1.
Cooperation with top-tier engineers and experts.
Unlimited access to the Udemy learning platform from day 1.
Certificate training programs. Lingarians earn 500+ technology certificates yearly.
Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly.
Grow as we grow as a company. 76% of our managers are internal promotions.
A diverse, inclusive, and values-driven community.
Autonomy to choose the way you work. We trust your ideas.
Create our community together. Refer your friends to receive bonuses.
Activities to support your well-being and health.
Plenty of opportunities to donate to charities and support the environment.

If you are interested in this position, please apply via the link below.
Application Link
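To make the PySpark, JSON, and logging items above concrete, here is a small hedged sketch of the kind of task an L3 specialist might own: load a JSON file with PySpark, apply a transform with pyspark.sql.functions, and log any failure before re-raising for the scheduler. The paths and column names are invented.

```python
# Sketch: read a JSON file with PySpark, transform with pyspark.sql.functions,
# and log errors, mirroring the duties listed above. Paths/columns are invented.
import logging
from pyspark.sql import SparkSession, functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("l3-support")

spark = SparkSession.builder.appName("json-feed").getOrCreate()

try:
    df = spark.read.json("/mnt/landing/payments.json")  # placeholder path
    fixed = df.withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    log.info("Loaded %d rows", fixed.count())
    fixed.write.mode("overwrite").parquet("/mnt/curated/payments/")
except Exception:
    # Surface the failure to the orchestrator (e.g., a scheduled job) with a trace.
    log.exception("Payment feed processing failed; re-raising for the scheduler")
    raise
```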

Posted 4 days ago

Apply

0 years

0 Lacs

India

Remote

Role: BI Tech Lead
Location: India (Remote)

About Lingaro: Lingaro Group is the end-to-end data services partner to global brands and enterprises. We lead our clients through their data journey, from strategy through development to operations and adoption, helping them to realize the full value of their data. Since 2008, Lingaro has been recognized by clients and global research and advisory firms for innovation, technology excellence, and the consistent delivery of highest-quality data services. Our commitment to data excellence has created an environment that attracts the brightest global data talent to our team. Website: https://lingarogroup.com/

Duties Description:
Provide technical guidance, mentorship, and support to team members.
Coordinate and collaborate with back-end and front-end BI developer teams during the build and implementation phases of reporting solutions.
Coordinate with Project Managers and Business Analysts to ensure alignment of technical solutions with the customer's requirements.
Collaborate with development teams to define technical requirements and specifications for reporting solutions.
Lead the design and architecture of technical solutions for business intelligence reporting.
Review and approve technical designs and ensure adherence to technical quality criteria.
Confirm that delivered solutions meet technical quality criteria, including code quality, security, performance, and scalability.
Conduct regular code reviews and provide feedback to ensure adherence to best development practices.
Participate in project planning and estimation activities.
Communicate technical aspects of the solution to stakeholders and customers in a clear and concise manner.
Stay updated on the latest industry trends, emerging technologies, and best practices in the business intelligence domain.
Share knowledge and expertise with team members through training sessions, workshops, or documentation.
Mentor team members to enhance their technical skills and capabilities.

Requirements:
Experience in leading and coordinating teams of back-end and front-end BI developers.
Strong knowledge of business intelligence concepts, methodologies, and tools.
Strong experience with Looker.
Proficiency in back-end development technologies such as Google BigQuery, as well as database management and ETL process implementation.
Proficiency with data integration techniques and tools.
Understanding of data modeling and data warehousing principles.
Knowledge of data security and privacy best practices.
In-depth knowledge of performance optimization techniques.
Strong problem-solving and decision-making skills to resolve technical issues and challenges.
Ability to provide technical guidance, mentorship, and support to team members.

Nice to have:
Tableau knowledge.

Why join us:
Stable employment. On the market since 2008, with 1300+ talents currently on board in 7 global sites.
100% remote. Flexibility regarding working hours. Full-time position.
Comprehensive online onboarding program with a "Buddy" from day 1.
Cooperation with top-tier engineers and experts.
Unlimited access to the Udemy learning platform from day 1.
Certificate training programs. Lingarians earn 500+ technology certificates yearly.
Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly.
Grow as we grow as a company. 76% of our managers are internal promotions.
A diverse, inclusive, and values-driven community.
Autonomy to choose the way you work. We trust your ideas.
Create our community together. Refer your friends to receive bonuses.
Activities to support your well-being and health.
Plenty of opportunities to donate to charities and support the environment.

If you are interested in the position, please apply via the link below.
Application Link
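As a hedged illustration of the BigQuery back-end work this role mentions, the google-cloud-bigquery Python client can run an aggregate query as below; the project, dataset, and table names are hypothetical.

```python
# Minimal BigQuery query sketch using the google-cloud-bigquery client.
# Requires credentials in the environment; table names are invented.
from google.cloud import bigquery

client = bigquery.Client()  # picks up project/credentials from the environment

sql = """
    SELECT region, SUM(revenue) AS total_revenue
    FROM `my-project.sales.orders`        -- hypothetical table
    GROUP BY region
    ORDER BY total_revenue DESC
"""

# Run the query and iterate the result rows.
for row in client.query(sql).result():
    print(row.region, row.total_revenue)
```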

Posted 4 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Greetings from BCforward INDIA TECHNOLOGIES PRIVATE LIMITED.
Contract To Hire (C2H) Role
Location: Gurgaon
Payroll: BCforward
Work Mode: Hybrid

JD Skills: Big Data; ETL - Big Data / Data Warehousing; GCP; Adobe Experience Manager (AEM)
Primary Skills: GCP, Adobe suite (e.g., AEP, CJA, CDP), SQL, Big Data, Python
Secondary Skills: Airflow, Hive, Spark, Unix shell scripting, data warehousing concepts

Please share your updated resume, PAN card soft copy, passport-size photo, and UAN history. Interested applicants can share their updated resume with g.sreekanth@bcforward.com.
Note: Looking for immediate to 30-day joiners at most. All the best 👍

Posted 4 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

About the Company: They balance innovation with an open, friendly culture and the backing of a long-established parent company known for its ethical reputation. We guide customers from what's now to what's next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.

· Job Title: ETL Developer
· Location: Pune (Hybrid)
· Experience: 6+ years
· Job Type: Contract to hire
· Notice Period: Immediate joiners

Mandatory Skills: Strong experience and knowledge in Oracle PL/SQL, Linux, analysis, design, testing, and troubleshooting.

Posted 4 days ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Key Responsibilities:
· Azure Cloud & Databricks:
o Design and build efficient data pipelines using Azure Databricks (PySpark).
o Implement business logic for data transformation and enrichment at scale.
o Manage and optimize Delta Lake storage solutions.
· API Development:
o Develop REST APIs using FastAPI to expose processed data.
o Deploy APIs on Azure Functions for scalable and serverless data access.
· Data Orchestration & ETL:
o Develop and manage Airflow DAGs to orchestrate ETL processes.
o Ingest and process data from various internal and external sources on a scheduled basis.
· Database Management:
o Handle data storage and access using PostgreSQL and MongoDB.
o Write optimized SQL queries to support downstream applications and analytics.
· Collaboration:
o Work cross-functionally with teams to deliver reliable, high-performance data solutions.
o Follow best practices in code quality, version control, and documentation.

Required Skills & Experience:
· 5+ years of hands-on experience as a Data Engineer.
· Strong experience with Azure Cloud services.
· Proficient in Azure Databricks, PySpark, and Delta Lake.
· Solid experience with Python and FastAPI for API development.
· Experience with Azure Functions for serverless API deployments.
· Skilled in managing ETL pipelines using Apache Airflow.
· Hands-on experience with PostgreSQL and MongoDB.
· Strong SQL skills and experience handling large datasets.
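For instance, the Airflow orchestration described above might take the shape of the following minimal DAG sketch; the task bodies, schedule, and DAG name are illustrative placeholders rather than details from the posting.

```python
# Minimal Airflow DAG sketch for a daily ETL, matching the orchestration
# duties above. Task logic and schedule are illustrative placeholders.
# (The `schedule` argument assumes Airflow 2.4+; older versions use
# `schedule_interval` instead.)
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source systems")  # placeholder

def transform():
    print("apply business logic")      # placeholder

def load():
    print("write to PostgreSQL")       # placeholder

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # run the three steps in sequence
```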

Posted 4 days ago

Apply

0 years

0 Lacs

India

On-site

Caprae Capital Partners is an innovative private equity firm led by principal Kevin Hong, a serial tech entrepreneur who grew two startups to $31M ARR and $7M in revenue. The fund originated with two additional tech entrepreneur friends of Kevin who have had ~8-figure and ~9-figure exits to Twitter and Square, respectively. Additional partners include an ex-NASA software engineer and an ex-Chief of Staff from Google. Caprae Capital, in conjunction with its portfolio company, launched AI-RaaS (AI Readiness as a Service) and is looking for teammates to join for the long haul. If you have a passion for disrupting the finance industry and happen to be a mission-driven person, this is a great fit for you. Additionally, given the recent expansion of this particular firm, you will have the opportunity to work from the ground level and take on a leadership role for the internship program, which would result in a paid role. Lastly, this is also a great role for those who are looking into strategy and consulting roles in the future, as it will give you the exposure and experience necessary to develop strong business acumen.

Role Overview
We are looking for a Lead Full Stack Developer to architect and lead the development of new features for SaaSquatchLeads.com, an AI-driven lead generation and sales intelligence platform. You will own technical direction, guide other engineers, and ensure our stack is scalable, maintainable, and optimized for AI-powered workloads.

Key Responsibilities
Lead architectural design and technical strategy for SaaSquatchLeads.com.
Develop, deploy, and maintain end-to-end features spanning frontend, backend, and AI integrations.
Implement and optimize AI-driven services for lead scoring, personalization, and predictive analytics.
Build and maintain data pipelines for ingesting, processing, and analyzing large datasets.
Mentor and guide a distributed engineering team, setting best coding practices.
Collaborate with product, design, and data science teams to align technical execution with business goals.
Ensure security, performance, and scalability of the platform.

Required Skills & Technologies
Frontend: React, JavaScript (ES6+), TypeScript, Redux/Zustand, HTML, CSS, TailwindCSS.
Backend: Python (Flask, FastAPI, Django), Node.js (bonus).
AI & Data Science: Python, PyTorch, Hugging Face, OpenAI APIs, LangChain, Pandas, NumPy.
Databases: PostgreSQL, MySQL, MongoDB, Redis.
DevOps & Infrastructure: Docker, Kubernetes, AWS (Lambda, S3, RDS, EC2), CI/CD pipelines.
Data Processing: ETL tools, message queues (Kafka, RabbitMQ).
Search & Indexing: Elasticsearch, Meilisearch (for fast lead lookups).

Posted 4 days ago

Apply

4.0 - 6.0 years

0 Lacs

India

Remote

Location: Remote
Experience: 4-6 years
Position: Gen-AI Developer (Hands-on)

Technical Requirements:
Hands-on data science, agentic AI, AI/Gen AI/ML/NLP
Azure services (App Services, Containers, AI Foundry, AI Search, Bot Services)
Experience in C# Semantic Kernel
Strong background in working with LLMs and building Gen AI applications
AI agent concepts
.NET Aspire
End-to-end environment setup for ML/LLM/agentic AI (dev/prod/test)
Machine learning and LLM development and deployment
Model training, fine-tuning, and deployment
Kubernetes, Docker, serverless architecture
Infrastructure as Code (Terraform, Azure Resource Manager)
Performance optimization and cost management: cloud cost management, resource optimization, auto-scaling, and cost-efficiency strategies for cloud resources
MLOps frameworks (Kubeflow, MLflow, TFX)
Large language model fine-tuning and optimization
Data pipelines (Apache Airflow, Kafka, Azure Data Factory)
Data storage (SQL/NoSQL, data lakes, data warehouses)
Data processing and ETL workflows
Cloud security practices (VPCs, firewalls, IAM)
Secure cloud architecture and data privacy
CI/CD pipelines (Azure DevOps, GitHub Actions, Jenkins)
Automated testing and deployment for ML models
Agile methodologies (Scrum, Kanban); cross-functional team collaboration and sprint management
Experience with model fine-tuning and infrastructure setup for local LLMs
Custom model training and deployment pipeline design
Good communication skills (written and oral)
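As a small, hedged example of the MLOps tooling named in this list, MLflow experiment tracking for a fine-tuning run can be as simple as the following; the experiment, parameter, and metric names are made up for illustration.

```python
# Minimal MLflow tracking sketch: log parameters and a metric for one run.
# Experiment name, parameters, and values are illustrative only.
import mlflow

mlflow.set_experiment("genai-finetune-demo")  # hypothetical experiment name

with mlflow.start_run():
    mlflow.log_param("base_model", "llama-3-8b")   # placeholder model id
    mlflow.log_param("learning_rate", 2e-5)
    mlflow.log_metric("eval_loss", 0.42)           # placeholder metric
```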

Posted 4 days ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Description
GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/)

Leads projects for the design, development, and maintenance of a data and analytics platform. Effectively and efficiently processes, stores, and makes data available to analysts and other consumers. Works with key business stakeholders, IT experts, and subject-matter experts to plan, design, and deliver optimal analytics and data science solutions. Works on one or many product teams at a time.

Key Responsibilities
Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured).
Designs and implements a framework to continuously monitor and troubleshoot data quality and data integrity issues.
Implements data governance processes and methods for managing metadata, access, and retention for internal and external users.
Designs and provides guidance on building reliable, efficient, scalable, and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages.
Designs and implements physical data models to define the database structure, optimizing database performance through efficient indexing and table relationships.
Participates in optimizing, testing, and troubleshooting of data pipelines.
Designs, develops, and operates large-scale data storage and processing solutions using distributed and cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB, and others).
Uses innovative and modern tools, techniques, and architectures to partially or completely automate the most common, repeatable, and tedious data preparation and integration tasks, in order to minimize manual and error-prone processes and improve productivity.
Assists with renovating the data management infrastructure to drive automation in data integration and management.
Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum, and Kanban.
Coaches and develops less experienced team members.

Responsibilities
Competencies:
System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts.
Collaborates - Building partnerships and working collaboratively with others to meet shared objectives.
Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences.
Customer focus - Building strong customer relationships and delivering customer-centric solutions.
Decision quality - Making good and timely decisions that keep the organization moving forward.
Data Extraction - Performs data extract-transform-load (ETL) activities from a variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies.
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements.
Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product.
Solution Documentation - Documents information and solutions based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning.
Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements.
Data Quality - Identifies, understands and corrects flaws in data to support effective information governance across operational business processes and decision making.
Problem Solving - Solves problems, and may mentor others on effective problem solving, using a systematic analysis process that leverages industry-standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented.
Values differences - Recognizing the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications
College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience, required. This position may require licensing for compliance with export controls or sanctions regulations.

Experience
Intermediate experience in a relevant discipline area is required. Knowledge of the latest technologies and trends in data engineering is highly preferred and includes:
5-8 years of experience
Familiarity analyzing complex business systems, industry requirements, and/or data regulations
Background in processing and managing large data sets
Design and development for a Big Data platform using open-source and third-party tools
Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka, or equivalent college coursework
SQL query language
Clustered-compute, cloud-based implementation experience
Experience developing applications requiring large file movement for a cloud-based environment, plus other data extraction tools and methods for a variety of sources
Experience in building analytical solutions

Intermediate experience in the following is preferred:
Experience with IoT technology
Experience in Agile software development

Qualifications
Work closely with the business Product Owner to understand the product vision.
Play a key role across DBU Data & Analytics Power Cells to define and develop data pipelines for efficient data transport into the Cummins Digital Core (Azure Data Lake, Snowflake).
Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment with DBU project data pipeline design standards.
Independently design, develop, test, and implement complex data pipelines from transactional systems (ERP, CRM) to data warehouses and the data lake.
Responsible for the creation, maintenance, and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOPs).
Take part in the evaluation of new data tools and POCs and provide suggestions.
Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization.
Proactively address and resolve issues that compromise data accuracy and usability.

Preferred Skills
Programming Languages: Proficiency in languages such as Python, Java, and/or Scala.
Database Management: Expertise in SQL and NoSQL databases.
Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks.
Cloud Services: Experience with Azure, Databricks, and AWS cloud platforms.
ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes.
Data Replication: Working knowledge of replication technologies like Qlik Replicate is a plus.
API: Working knowledge of APIs to consume data from ERP and CRM systems.
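Since the role calls out both Spark and Kafka, a minimal Structured Streaming sketch is shown below; the broker address, topic, and output paths are invented placeholders, not Cummins specifics, and the job assumes the spark-sql-kafka connector package is available.

```python
# Sketch: consume a Kafka topic with Spark Structured Streaming and land it
# in a data lake path. Broker, topic, and paths are invented placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "telemetry")                  # placeholder topic
    .load()
)

# Kafka delivers raw bytes; cast the value and keep the event timestamp.
events = stream.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("ingested_at"),
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/lake/raw/telemetry/")            # placeholder path
    .option("checkpointLocation", "/lake/_chk/telemetry/")
    .start()
)
query.awaitTermination()
```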

Posted 4 days ago

Apply

4.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Description
GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/)

Supports, develops, and maintains a data and analytics platform. Effectively and efficiently processes, stores, and makes data available to analysts and other consumers. Works with the business and IT teams to understand the requirements and best leverage the technologies to enable agile data delivery at scale.

Key Responsibilities
Implements and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured).
Implements methods to continuously monitor and troubleshoot data quality and data integrity issues.
Implements data governance processes and methods for managing metadata, access, and retention for internal and external users.
Develops reliable, efficient, scalable, and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages.
Develops physical data models and implements data storage architectures as per design guidelines.
Analyzes complex data elements and systems, data flow, dependencies, and relationships in order to contribute to conceptual, physical, and logical data models.
Participates in testing and troubleshooting of data pipelines.
Develops and operates large-scale data storage and processing solutions using distributed and cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB, and others).
Uses agile development technologies, such as DevOps, Scrum, Kanban, and the continuous improvement cycle, for data-driven applications.

Responsibilities
Competencies:
System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts.
Collaborates - Building partnerships and working collaboratively with others to meet shared objectives.
Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences.
Customer focus - Building strong customer relationships and delivering customer-centric solutions.
Decision quality - Making good and timely decisions that keep the organization moving forward.
Data Extraction - Performs data extract-transform-load (ETL) activities from a variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies.
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements.
Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product.
Solution Documentation - Documents information and solutions based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning.
Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements.
Data Quality - Identifies, understands and corrects flaws in data to support effective information governance across operational business processes and decision making.
Problem Solving - Solves problems, and may mentor others on effective problem solving, using a systematic analysis process that leverages industry-standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented.
Values differences - Recognizing the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications
College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience, required. This position may require licensing for compliance with export controls or sanctions regulations.

Experience
4-5 years of experience. Relevant experience preferred, such as temporary student employment, an internship, a co-op, or other extracurricular team activities. Knowledge of the latest technologies in data engineering is highly preferred and includes:
Exposure to open-source Big Data tools: Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka, or equivalent college coursework
SQL query language
Clustered-compute, cloud-based implementation experience
Familiarity developing applications requiring large file movement for a cloud-based environment
Exposure to Agile software development
Exposure to building analytical solutions
Exposure to IoT technology

Qualifications
Work closely with the business Product Owner to understand the product vision.
Participate in DBU Data & Analytics Power Cells to define and develop data pipelines for efficient data transport into the Cummins Digital Core (Azure Data Lake, Snowflake).
Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment with DBU project data pipeline design standards.
Work under limited supervision to design, develop, test, and implement complex data pipelines from transactional systems (ERP, CRM) to data warehouses and the data lake.
Responsible for the creation of DBU Data & Analytics data engineering documentation and standard operating procedures (SOPs), with guidance and help from senior data engineers.
Take part in the evaluation of new data tools and POCs, with guidance and help from senior data engineers.
Take ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization under limited supervision.
Assist in resolving issues that compromise data accuracy and usability.

Programming Languages: Proficiency in languages such as Python, Java, and/or Scala.
Database Management: Intermediate-level expertise in SQL and NoSQL databases.
Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks.
Cloud Services: Experience with Azure, Databricks, and AWS cloud platforms.
ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes.
API: Working knowledge of APIs to consume data from ERP and CRM systems.

Posted 4 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Description: Automation Tester
As a Test Lead / Architect, you are passionate about seeing customers succeed and ensuring best-in-class product quality. The objective of the role is end-to-end ownership of testing of the product, acting as the custodian of product quality. With the Digital Engineering team, you will have the opportunity to join a fast-growing team that is embarking on a multi-year implementation as part of an ongoing digital modernization effort. As the project team ramps up, you will have the chance to help define and shape the vision of how the solution will be maintained and monitored to meet the business' needs.

Experience Level: 5+ years

Roles and Responsibilities:
Experience in a Telecom industry application is a MUST.
Experience in testing CRM, web applications, billing, and order management.
Deep experience with system-level debugging (including customer issues) and a good understanding of managing and triaging production-level issues.
Experience with a database query language such as SQL, in order to query and validate production data for analysis. Exposure to database/ETL testing.
Understanding of the Microsoft Azure cloud environment.
Responsible for maintaining QE health through facilitated defect management using standardized triage, defect management, and communication processes.
Monitor trends and improvement opportunities using mean time to resolution, defect RCAs, and environment downtime as key performance indicators.
Conduct daily defect reviews with key program stakeholders, including dev, test, product, and support teams, to ensure a path forward and reduce the defect burn-down for all prioritized defects.
Monitor and ensure that overall defect processes are aligned with a single standard defect process across all test phases and teams.
Provide test estimation to leads for intake planning.
Good understanding of API testing; able to perform API testing using relevant tools like Postman, REST Assured, or SoapUI.
Experience with a test/defect management tool (preferably JIRA/Zephyr).
Partner with other leads and architects for test planning, assignment, and reporting.
Monitor chats and host working sessions with impacted teams, ensuring there is a path forward and active investigation on defects until disposition.
Escalation point for disputed defects and their resolution.
Conduct root cause analysis for defects and ensure each defect is closed under the right root cause category.
Mitigate impediments and foster a work environment that supports high-performing team dynamics, continuous team workflow, and relentless improvement.
Responsible for designing a holistic test architecture and test solutions in alignment with business requirements and solution specifications.
Work with the Test Data team on test data requirements and fulfillment.

Primary / Mandatory Skills:
5+ years' experience in product testing, with a minimum of 3+ years of experience in defect management/production validation testing.
Proven experience in testing/defect management and triaging in a fast-paced software development environment.
Experience in using defect tracking tools like JIRA and creating reports using Power BI.
Experience with system-level debugging (including customer issues) and a good understanding of managing and triaging production-level issues.
Familiarity with a database query language such as SQL, to query and validate production data for analysis.
Exposure to test automation (Selenium).
Good understanding of API testing; able to perform API testing using relevant tools like Postman, REST Assured, or SoapUI.
Experience with a test/defect management tool (preferably JIRA/Zephyr).
Expertise in preparing daily status reports/dashboards for management.
Decision maker for entry and exit of internal/development testing stages.
Able to coordinate proactively with the testing team, dev leads, and other members.
Expertise in risk identification and analysis.
Proven expertise in Agile software development, especially Scrum and Kanban.
Experience in the Telecom industry is an added advantage.
Strong written and verbal communication skills.

Technical Skills: Selenium, Java, JavaScript, JMeter, REST Assured, SQL, Maven, Eclipse/VS Code, Bitbucket, JIRA, Jenkins, Git/GitHub, DevOps, Postman

Additional information: Willing to work in shift duties. Willingness to learn is very important, as AT&T offers an excellent environment to learn digital transformation skills such as cloud.

Weekly Hours: 40
Time Type: Regular
Location: Bangalore, Karnataka, India

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
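The posting names Postman, REST Assured, and SoapUI for API testing; sketched here instead in Python with requests and pytest (an assumption made for consistency with the other examples on this page), an equivalent contract check might look like the following. The endpoint, order ID, and expected fields are hypothetical.

```python
# Hypothetical API contract test in pytest + requests, analogous to a
# Postman or REST Assured check. Endpoint and fields are placeholders.
import requests

BASE_URL = "https://api.example.com"  # placeholder service under test

def test_get_order_returns_expected_fields():
    resp = requests.get(f"{BASE_URL}/orders/12345", timeout=10)
    assert resp.status_code == 200

    body = resp.json()
    # Validate the contract: required fields present with sane values.
    assert body["orderId"] == "12345"
    assert body["status"] in {"CREATED", "BILLED", "SHIPPED"}
    assert isinstance(body["lineItems"], list)

# Run with: pytest test_orders.py   (assumes this file is test_orders.py)
```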

Posted 4 days ago

Apply

0.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Associate Data Engineer
Bangalore, India | Information Technology | Job ID: 317963

Job Description
About The Role: Grade Level (for internal use): 08
Job Title: Associate Data Engineer
Location: Bangalore (Hybrid)

The Team: The Automotive Insights - Supply Chain and Technology and IMR department at S&P Global is dedicated to delivering critical intelligence and comprehensive analysis of the automotive industry's supply chain and technology. Our team provides actionable insights and data-driven solutions that empower clients to navigate the complexities of the automotive ecosystem, from manufacturing and logistics to technological innovations and market dynamics. We collaborate closely with industry stakeholders to ensure our research supports strategic decision-making and drives growth within the automotive sector. Join us to be at the forefront of transforming the automotive landscape with cutting-edge insights and expertise.

Responsibilities and Impact:
Develop and maintain automated data pipelines to extract, transform, and load data from diverse online sources, ensuring high data quality.
Build, optimize, and document web scraping tools using Python and related libraries to support ongoing research and analytics.
Implement DevOps practices for deploying, monitoring, and maintaining machine learning workflows in production environments.
Collaborate with data scientists and analysts to deliver reliable, well-structured data for analytics and modeling.
Perform data quality checks, troubleshoot pipeline issues, and ensure alignment with internal taxonomies and standards.
Stay current with advancements in data engineering, DevOps, and web scraping technologies, contributing to team knowledge and best practices.

What We're Looking For:
Basic Required Qualifications:
Bachelor's degree in computer science, engineering, or a related field.
1 to 3 years of hands-on experience in data engineering, including web scraping and ETL pipeline development using Python.
Proficiency with Python programming and libraries such as Pandas, BeautifulSoup, Selenium, or Scrapy.
Exposure to implementing and maintaining DevOps workflows, including model deployment and monitoring.
Familiarity with containerization technologies (e.g., Docker) and CI/CD pipelines for data and ML workflows.
Familiarity with cloud platforms (preferably AWS).

Key Soft Skills:
Strong analytical and problem-solving skills, with attention to detail.
Excellent communication and collaboration abilities for effective teamwork.
Ability to work independently and manage multiple priorities.
Curiosity and a proactive approach to learning and applying new technologies.

About S&P Global Mobility
At S&P Global Mobility, we provide invaluable insights derived from unmatched automotive data, enabling our customers to anticipate change and make decisions with conviction. Our expertise helps them to optimize their businesses, reach the right consumers, and shape the future of mobility. We open the door to automotive innovation, revealing the buying patterns of today and helping customers plan for the emerging technologies of tomorrow. For more information, visit www.spglobal.com/mobility.

What's In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day.
We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global. Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH203 - Entry Professional (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)

Job ID: 317963
Posted On: 2025-08-01
Location: Bangalore, Karnataka, India
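To ground the scraping-focused responsibilities in this posting, here is a minimal hedged sketch using requests, BeautifulSoup, and pandas; the URL, CSS selectors, and output columns are invented for illustration and are not a real S&P Global source.

```python
# Minimal web-scraping-to-DataFrame sketch (requests + BeautifulSoup + pandas),
# of the kind the responsibilities above describe. URL and CSS selectors are
# invented placeholders.
import pandas as pd
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/vehicle-listings"  # placeholder source

resp = requests.get(URL, timeout=15)
resp.raise_for_status()  # fail loudly on HTTP errors

soup = BeautifulSoup(resp.text, "html.parser")
rows = []
for card in soup.select("div.listing"):       # placeholder selector
    title = card.select_one("h2")
    price = card.select_one("span.price")
    if title is None or price is None:
        continue  # skip cards that don't match the expected layout
    rows.append({
        "model": title.get_text(strip=True),
        "price": price.get_text(strip=True),
    })

df = pd.DataFrame(rows)
print(df.head())
```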

Posted 4 days ago

Apply

0.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

About the Role:
Grade Level (for internal use): 08
Job Title: Associate Data Engineer
Location: Bangalore (Hybrid)

The Team: The Automotive Insights - Supply Chain and Technology and IMR department at S&P Global is dedicated to delivering critical intelligence and comprehensive analysis of the automotive industry's supply chain and technology. Our team provides actionable insights and data-driven solutions that empower clients to navigate the complexities of the automotive ecosystem, from manufacturing and logistics to technological innovations and market dynamics. We collaborate closely with industry stakeholders to ensure our research supports strategic decision-making and drives growth within the automotive sector. Join us to be at the forefront of transforming the automotive landscape with cutting-edge insights and expertise.

Responsibilities and Impact:
Develop and maintain automated data pipelines to extract, transform, and load data from diverse online sources, ensuring high data quality.
Build, optimize, and document web scraping tools using Python and related libraries to support ongoing research and analytics (a minimal sketch follows this listing).
Implement DevOps practices for deploying, monitoring, and maintaining machine learning workflows in production environments.
Collaborate with data scientists and analysts to deliver reliable, well-structured data for analytics and modeling.
Perform data quality checks, troubleshoot pipeline issues, and ensure alignment with internal taxonomies and standards.
Stay current with advancements in data engineering, DevOps, and web scraping technologies, contributing to team knowledge and best practices.

What We're Looking For:
Basic Required Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
1 to 3 years of hands-on experience in data engineering, including web scraping and ETL pipeline development using Python.
Proficiency with Python programming and libraries such as Pandas, BeautifulSoup, Selenium, or Scrapy.
Exposure to implementing and maintaining DevOps workflows, including model deployment and monitoring.
Familiarity with containerization technologies (e.g., Docker) and CI/CD pipelines for data and ML workflows.
Familiarity with cloud platforms (preferably AWS).

Key Soft Skills:
Strong analytical and problem-solving skills, with attention to detail.
Excellent communication and collaboration abilities for effective teamwork.
Ability to work independently and manage multiple priorities.
Curiosity and a proactive approach to learning and applying new technologies.

About S&P Global Mobility
At S&P Global Mobility, we provide invaluable insights derived from unmatched automotive data, enabling our customers to anticipate change and make decisions with conviction. Our expertise helps them to optimize their businesses, reach the right consumers, and shape the future of mobility. We open the door to automotive innovation, revealing the buying patterns of today and helping customers plan for the emerging technologies of tomorrow. For more information, visit www.spglobal.com/mobility.

What's In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH203 - Entry Professional (EEO Job Group), SWP Priority - Ratings - (Strategic Workforce Planning)
Job ID: 317963
Posted On: 2025-08-01
Location: Bangalore, Karnataka, India
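For a feel of the scrape-and-transform work this role describes, here is a minimal Python sketch using requests, BeautifulSoup, and pandas. The URL, table selector, and field names are hypothetical placeholders, not S&P Global systems.

```python
# Minimal scrape-and-transform sketch; URL, selector, and fields are hypothetical.
import requests
from bs4 import BeautifulSoup
import pandas as pd

def scrape_listings(url: str) -> pd.DataFrame:
    """Fetch a page, parse rows from an HTML table, return a clean DataFrame."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    rows = []
    for tr in soup.select("table.listings tr")[1:]:  # skip the header row
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        if len(cells) == 3:
            rows.append({"model": cells[0], "region": cells[1], "volume": cells[2]})
    df = pd.DataFrame(rows)
    # Basic data-quality check before handing data downstream.
    df["volume"] = pd.to_numeric(df.get("volume"), errors="coerce")
    return df.dropna(subset=["volume"]) if not df.empty else df

if __name__ == "__main__":
    print(scrape_listings("https://example.com/supply-chain/listings").head())
```

For JavaScript-heavy sources, a Scrapy spider or Selenium driver would replace requests, but the parse-validate-load shape stays the same.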

Posted 4 days ago

Apply

0.0 - 2.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Location: Bengaluru, KA, IN
Company: ExxonMobil

About us
At ExxonMobil, our vision is to lead in energy innovations that advance modern living and a net-zero future. As one of the world's largest publicly traded energy and chemical companies, we are powered by a unique and diverse workforce fueled by the pride in what we do and what we stand for. The success of our Upstream, Product Solutions and Low Carbon Solutions businesses is the result of the talent, curiosity and drive of our people. They bring solutions every day to optimize our strategy in energy, chemicals, lubricants and lower-emissions technologies. We invite you to bring your ideas to ExxonMobil to help create sustainable solutions that improve quality of life and meet society's evolving needs. Learn more about our What and our Why and how we can work together.

ExxonMobil's affiliates in India
ExxonMobil's affiliates have offices in India in Bengaluru, Mumbai and the National Capital Region. ExxonMobil's affiliates in India supporting the Product Solutions business engage in the marketing, sales and distribution of performance as well as specialty products across chemicals and lubricants businesses. The India planning teams are also embedded with global business units for business planning and analytics. ExxonMobil's LNG affiliate in India supporting the upstream business provides consultant services for other ExxonMobil upstream affiliates and conducts LNG market-development activities. The Global Business Center - Technology Center provides a range of technical and business support services for ExxonMobil's operations around the globe. ExxonMobil strives to make a positive contribution to the communities where we operate, and its affiliates support a range of education, health and community-building programs in India. Read more about our Corporate Responsibility Framework. To know more about ExxonMobil in India, visit ExxonMobil India and the Energy Factor India.

What role you will play in our team
The UDO DPF Data Engineer will lean in and own the work, connect with others, be resourceful, engage in data communities, pursue technical growth, and bring enthusiasm and commitment. They will be an essential part of a data squad, a small group of data specialists assigned to a capability, developing domain knowledge and an understanding of the data within business workflows so that every data product is done right and delights the customer.

City: Bengaluru, Karnataka

What you will do
Perform ETL and ELT operations and administration using modern tools, programming languages and systems, securely and in accordance with enterprise data standards
Assemble, model and transform large, complex sets of data that meet non-functional and functional business requirements into a format that can be analyzed
Automate processing of data from multiple data sources
Develop, deploy and version-control code for data consumption and reuse via APIs
Employ machine learning techniques to create and sustain data structures
Perform root cause analysis on external and internal processes and data to identify opportunities for improvement and resolve data quality issues
Lead data-related workshops with stakeholders to capture data requirements and acceptance criteria

About You: Skills and Qualifications
Minimum bachelor's degree in Data Science, Business Intelligence, Statistics, Computer Engineering or a related field, or the equivalent combination of education, professional training and work experience
Minimum 2 years' experience performing duties related to data engineering
Advanced English level
Expert proficiency in at least one of these programming languages: Python, NoSQL, SQL, R; competent in source code management
Build processes supporting data transformation, data structures, metadata, dependency and workload management
Create data validation methods and data analysis tools (a small sketch follows this listing)

Preferred Qualifications/Experience
Excellent problem-solving skills and ability to learn through scattered resources
Automate routine tasks via scripts and code
Capacity to successfully manage a pipeline of duties with minimal supervision
Experience supporting and working with cross-functional teams in a dynamic environment
Modify existing reports, extracts, dashboards and cubes as necessary
Commitment to operations integrity and ability to hold self and others accountable for results
Data governance skills: Data Quality Management, Metadata Management, Data Lineage & Provenance, Master Data Management (MDM), data cataloging tools
Experience with tools like Collibra, Alation, Azure Purview, Informatica or Google Data Catalog; data classification and tagging

Your benefits
An ExxonMobil career is one designed to last. Our commitment to you runs deep: our employees grow personally and professionally, with benefits built on our core categories of health, security, finance and life. We offer you:
Competitive compensation
Medical plans, maternity leave and benefits, life, accidental death and dismemberment benefits
Retirement benefits
Global networking & cross-functional opportunities
Annual vacations & holidays
Day care assistance program
Training and development program
Tuition assistance program
Workplace flexibility policy
Relocation program
Transportation facility
Please note benefits may change from time to time without notice, subject to applicable laws. The benefits programs are based on the Company's eligibility guidelines.

Stay connected with us
Learn more about ExxonMobil in India: visit ExxonMobil India and Energy Factor India. Follow us on LinkedIn and Instagram. Like us on Facebook. Subscribe to our channel on YouTube.

EEO Statement
ExxonMobil is an Equal Opportunity Employer: All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, national origin or disability status.

Business solicitation and recruiting scams
ExxonMobil does not use recruiting or placement agencies that charge candidates an advance fee of any kind (e.g., placement fees, immigration processing fees, etc.). Follow the LINK to understand more about recruitment scams in the name of ExxonMobil.

Nothing herein is intended to override the corporate separateness of local entities. Working relationships discussed herein do not necessarily represent a reporting connection, but may reflect a functional guidance, stewardship, or service relationship. Exxon Mobil Corporation has numerous affiliates, many with names that include ExxonMobil, Exxon, Esso and Mobil. For convenience and simplicity, those terms and terms like corporation, company, our, we and its are sometimes used as abbreviated references to specific affiliates or affiliate groups. Abbreviated references describing global or regional operational organizations and global or regional business lines are also sometimes used for convenience and simplicity. Similarly, ExxonMobil has business relationships with thousands of customers, suppliers, governments, and others. For convenience and simplicity, words like venture, joint venture, partnership, co-venturer, and partner are used to indicate business relationships involving common activities and interests, and those words may not indicate precise legal relationships.
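As a flavour of the ETL and data-validation duties listed above, a small Python sketch that validates raw records against simple rules before loading the clean rows; the schema, rules, and table name are illustrative assumptions only.

```python
# Small ETL-with-validation sketch; schema, rules, and table name are illustrative.
import sqlite3
import pandas as pd

RULES = {
    "unit_id": lambda s: s.notna(),
    "pressure_kpa": lambda s: s.between(0, 10_000),
}

def validate(df: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Split a frame into rows that pass all rules and rows that fail any."""
    mask = pd.Series(True, index=df.index)
    for col, rule in RULES.items():
        mask &= rule(df[col])
    return df[mask], df[~mask]

def load(df: pd.DataFrame, db_path: str = "warehouse.db") -> None:
    """Append validated rows to a local table (stand-in for a real warehouse)."""
    with sqlite3.connect(db_path) as conn:
        df.to_sql("sensor_readings", conn, if_exists="append", index=False)

if __name__ == "__main__":
    raw = pd.DataFrame({
        "unit_id": ["U1", None, "U3"],
        "pressure_kpa": [450.0, 300.0, -5.0],
    })
    good, bad = validate(raw)
    load(good)
    print(f"loaded {len(good)} rows, quarantined {len(bad)}")
```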

Posted 4 days ago

Apply


3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

General Information
Req #: WD00086226
Career area: Data Management and Analytics
Country/Region: India
State: Karnataka
City: Bangalore
Date: Friday, August 1, 2025
Working time: Full-time
Additional Locations: India - Karnātaka - Bangalore

Why Work at Lenovo
We are Lenovo. We do what we say. We own what we do. We WOW our customers. Lenovo is a US$57 billion revenue global technology powerhouse, ranked #248 in the Fortune Global 500, and serving millions of customers every day in 180 markets. Focused on a bold vision to deliver Smarter Technology for All, Lenovo has built on its success as the world's largest PC company with a full-stack portfolio of AI-enabled, AI-ready, and AI-optimized devices (PCs, workstations, smartphones, tablets), infrastructure (server, storage, edge, high performance computing and software defined infrastructure), software, solutions, and services. Lenovo's continued investment in world-changing innovation is building a more equitable, trustworthy, and smarter future for everyone, everywhere. Lenovo is listed on the Hong Kong stock exchange under Lenovo Group Limited (HKSE: 992) (ADR: LNVGY). To find out more visit www.lenovo.com, and read about the latest news via our StoryHub.

Description and Requirements
BS/BA in Computer Science, Mathematics, Statistics, MIS, or a related field
At least 5 years' experience in the data warehouse space
At least 5 years' experience in custom ETL/ELT design, implementation and maintenance
At least 5 years' experience in writing SQL statements
At least 3 years' experience with cloud-based data platform technologies such as Google BigQuery, or an equivalent Azure/Snowflake data platform (a BigQuery sketch follows this listing)
Ability to manage and communicate data warehouse plans to internal clients

NOTICE FOR PUBLIC
At Lenovo, we follow strict policies and legal compliance for our recruitment process, which includes role alignment, employment terms discussion, final selection and offer approval, and recording transactions in our internal system. Interviews may be conducted via audio, video, or in-person depending on the role, and you will always meet with an official Lenovo representative. Please beware of fraudulent recruiters posing as Lenovo representatives. They may request cash deposits or personal information. Always apply through official Lenovo channels and never share sensitive information. Lenovo does not solicit money or sensitive information from applicants and will not request payments for training or equipment. Kindly verify job offers through the official Lenovo careers page or contact IndiaTA@lenovo.com. Stay informed and cautious to protect yourself from recruitment fraud. Report any suspicious activity to local authorities.
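For a sense of the custom ELT and SQL work described above, a minimal sketch using the google-cloud-bigquery client; the dataset and table names are placeholders, and credentials are assumed to come from the environment (Application Default Credentials).

```python
# Minimal BigQuery ELT sketch; dataset and table names are hypothetical.
from google.cloud import bigquery  # pip install google-cloud-bigquery

def load_daily_summary(client: bigquery.Client) -> None:
    """Transform raw events into a daily summary table inside the warehouse."""
    sql = """
        CREATE OR REPLACE TABLE analytics.daily_orders AS
        SELECT DATE(order_ts)   AS order_date,
               region,
               COUNT(*)         AS orders,
               SUM(amount_usd)  AS revenue_usd
        FROM raw.orders
        GROUP BY order_date, region
    """
    client.query(sql).result()  # .result() blocks until the job finishes

if __name__ == "__main__":
    # Assumes GOOGLE_APPLICATION_CREDENTIALS (or ADC) is configured.
    load_daily_summary(bigquery.Client())
```

Keeping the transform in SQL and treating the client call as pure orchestration is a common pattern for warehouse-native ELT.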

Posted 4 days ago

Apply

0.0 - 4.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Category: Software Development/Engineering
Main location: India, Tamil Nadu, Chennai
Position ID: J0625-1610
Employment Type: Full Time

Position Description:
Company Profile: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.

Position: Senior Software Engineer
Experience: 3-6 years
Location: Bangalore/Hyderabad/Chennai/Pune/Mumbai
Shift Timing: General shift
Education Qualification: Bachelor's degree in computer science or a related field, or higher, with a minimum of 4 years of relevant experience.

We are seeking a Senior Java Developer passionate about delivering high-quality software solutions. The ideal candidate will have strong Java expertise, experience in designing and developing scalable applications, and the ability to mentor junior developers.

Your future duties and responsibilities:
Evaluate and select the appropriate version stack for each software release.
Upgrade all third-party software used in ERP applications.
Fix any defects related to stack upgrades and security issues.
Develop, test, and maintain Java-based applications.
Utilize Java design patterns (DTO, DAO) for efficient coding.
Perform unit and integration testing to ensure code quality.
Work with Agile methodologies and CI/CD pipelines.
Collaborate with cross-functional teams, including UI/UX designers.
Use GitLab for code versioning and JIRA for project tracking.
Guide and mentor junior developers.
Troubleshoot, optimize performance, and ensure best coding practices.

Required qualifications to be successful in this role:
Must-have skills: Java, Spring Boot, databases (MS SQL Server/Oracle), REST APIs
Good-to-have skills: microservices, Java JDK, JBoss, ETL replication, Debezium, Business Objects, Pentaho

Skills: Java, MS SQL Server, Oracle, Spring Boot

What you can expect from us: Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because... You are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.

Posted 4 days ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Description
GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/)

Leads projects for design, development and maintenance of a data and analytics platform. Effectively and efficiently processes, stores and makes data available to analysts and other consumers. Works with key business stakeholders, IT experts and subject-matter experts to plan, design and deliver optimal analytics and data science solutions. Works on one or many product teams at a time.

Key Responsibilities
Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured).
Designs and implements frameworks to continuously monitor and troubleshoot data quality and data integrity issues.
Implements data governance processes and methods for managing metadata, access and retention for internal and external users.
Designs and provides guidance on building reliable, efficient, scalable, quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages (a PySpark sketch follows this listing).
Designs and implements physical data models to define the database structure, optimizing database performance through efficient indexing and table relationships.
Participates in optimizing, testing and troubleshooting of data pipelines.
Designs, develops and operates large-scale data storage and processing solutions using distributed and cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB and others).
Uses innovative and modern tools, techniques and architectures to partially or completely automate the most common, repeatable and tedious data preparation and integration tasks, minimizing manual and error-prone processes and improving productivity.
Assists with renovating the data management infrastructure to drive automation in data integration and management.
Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum and Kanban.
Coaches and develops less experienced team members.

Competencies
System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts.
Collaborates - Building partnerships and working collaboratively with others to meet shared objectives.
Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences.
Customer focus - Building strong customer relationships and delivering customer-centric solutions.
Decision quality - Making good and timely decisions that keep the organization moving forward.
Data Extraction - Performs data extract-transform-load (ETL) activities from a variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies.
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements.
Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product.
Solution Documentation - Documents information and solutions based on knowledge gained during product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not part of the initial learning.
Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements.
Data Quality - Identifies, understands and corrects flaws in data to support effective information governance across operational business processes and decision making.
Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process, leveraging industry-standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented.
Values differences - Recognizing the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications
College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations.

Experience
5-8 years of experience; intermediate experience in a relevant discipline area is required. Knowledge of the latest technologies and trends in data engineering is highly preferred, including:
Familiarity with analyzing complex business systems, industry requirements, and/or data regulations
Background in processing and managing large data sets
Design and development for a big data platform using open source and third-party tools
Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka, or equivalent college coursework
SQL query language
Clustered compute cloud-based implementation experience
Experience developing applications requiring large file movement in a cloud-based environment, and other data extraction tools and methods from a variety of sources
Experience in building analytical solutions

Intermediate experience in the following is preferred:
Experience with IoT technology
Experience in Agile software development

Qualifications
Work closely with the business Product Owner to understand the product vision.
Play a key role across DBU Data & Analytics Power Cells to define and develop data pipelines for efficient data transport into the Cummins Digital Core (Azure Data Lake, Snowflake).
Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards.
Independently design, develop, test and implement complex data pipelines from transactional systems (ERP, CRM) to data warehouses and data lakes.
Responsible for creation, maintenance and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOPs).
Take part in evaluation of new data tools and POCs and provide suggestions.
Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization.
Proactively address and resolve issues that compromise data accuracy and usability.

Preferred Skills
Programming languages: proficiency in languages such as Python, Java, and/or Scala.
Database management: expertise in SQL and NoSQL databases.
Big data technologies: experience with Hadoop, Spark, Kafka, and other big data frameworks.
Cloud services: experience with Azure, Databricks and AWS cloud platforms.
ETL processes: strong understanding of extract, transform, load (ETL) processes.
Data replication: working knowledge of replication technologies like Qlik Replicate is a plus.
API: working knowledge of APIs to consume data from ERP and CRM systems.
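To ground the pipeline responsibilities above, a minimal PySpark ingest-transform-load sketch; the S3 paths and column names are illustrative assumptions, not Cummins systems.

```python
# Minimal PySpark ETL sketch; source/target paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

# Extract: read raw JSON events from the landing zone.
raw = spark.read.json("s3a://landing-zone/orders/2025-08-01/")

# Transform: normalize types, drop bad rows, derive a partition column.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull() & (F.col("amount") > 0))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet for downstream analytics.
(clean.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3a://data-lake/curated/orders/"))
```

Partitioning by the derived date column keeps downstream scans cheap, which is where most of the cost-efficiency in such pipelines comes from.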

Posted 4 days ago

Apply

6.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Client: Our client is a French multinational information technology (IT) services and consulting company, headquartered in Paris, France. Founded in 1967, it has been a leader in business transformation for over 50 years, leveraging technology to address a wide range of business needs, from strategy and design to managing operations. The company is committed to unleashing human energy through technology for an inclusive and sustainable future, helping organizations accelerate their transition to a digital and sustainable world. They provide a variety of services, including consulting, technology, professional, and outsourcing services.

Job Details:
Position: Data Analyst - AI & Bedrock
Experience Required: 6-10 years
Notice: Immediate
Work Location: Pune
Mode of Work: Hybrid
Type of Hiring: Contract to Hire

Job Description: FAS - Data Analyst - AI & Bedrock Specialization

About Us: We are seeking a highly experienced and visionary Data Analyst with a deep understanding of artificial intelligence (AI) principles and hands-on expertise with cutting-edge tools like Amazon Bedrock. This role is pivotal in transforming complex datasets into actionable insights, enabling data-driven innovation across our organization.

Role Summary: The Lead Data Analyst, AI & Bedrock Specialization, will be responsible for spearheading advanced data analytics initiatives, leveraging AI and generative AI capabilities, particularly with Amazon Bedrock. You will lead the design, development, and implementation of sophisticated analytical models, provide strategic insights to stakeholders, and mentor a team of data professionals. This role requires a blend of strong technical skills, business acumen, and a passion for pushing the boundaries of data analysis with AI.

Key Responsibilities:
Strategic Data Analysis & Insight Generation:
Lead end-to-end data analysis projects, from defining business problems to delivering actionable insights that influence strategic decisions.
Utilize advanced statistical methods, machine learning techniques, and AI-driven approaches to uncover complex patterns and trends in large, diverse datasets.
Develop and maintain comprehensive dashboards and reports, translating complex data into clear, compelling visualizations and narratives for executive and functional teams.

AI/ML & Generative AI Implementation (Bedrock Focus):
Implement data analytical solutions leveraging Amazon Bedrock, including selecting appropriate foundation models (e.g., Amazon Titan, Anthropic Claude) for specific use cases (text generation, summarization, complex data analysis).
Design and optimize prompts for Large Language Models (LLMs) to extract meaningful insights from unstructured and semi-structured data within Bedrock (a boto3 invocation sketch follows this listing).
Explore and integrate other AI/ML services (e.g., Amazon SageMaker, Amazon Q) to enhance data processing, analysis, and automation workflows.
Contribute to the development of AI-powered agents and intelligent systems for automated data analysis and anomaly detection.

Data Governance & Quality Assurance:
Ensure the accuracy, integrity, and reliability of data used for analysis.
Develop and implement robust data cleaning, validation, and transformation processes.
Establish best practices for data management, security, and governance in collaboration with data engineering teams.

Technical Leadership & Mentorship:
Evaluate and recommend new data tools, technologies, and methodologies to enhance analytical capabilities.
Collaborate with cross-functional teams, including product, engineering, and business units, to understand requirements and deliver data-driven solutions.

Research & Innovation:
Stay abreast of the latest advancements in AI, machine learning, and data analytics trends, particularly concerning generative AI and cloud-based AI services.
Proactively identify opportunities to apply emerging technologies to solve complex business challenges.

Required Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, Engineering, or a related quantitative field.
Progressive experience as a Data Analyst, Business Intelligence Analyst, or similar role, with a strong portfolio of successful data-driven projects.
Proven hands-on experience with AI/ML concepts and tools, with a specific focus on generative AI and Large Language Models (LLMs).
Demonstrable experience with Amazon Bedrock is essential, including knowledge of its foundation models, prompt engineering, and the ability to build AI-powered applications.
Expert-level proficiency in SQL for data extraction and manipulation from various databases (relational, NoSQL).
Advanced proficiency in Python (Pandas, NumPy, Scikit-learn, etc.) or R for data analysis, statistical modeling, and scripting.
Strong experience with data visualization tools such as Tableau, Power BI, Qlik Sense, or similar, with a focus on creating insightful and interactive dashboards.
Experience with cloud platforms (AWS preferred) and related data services (e.g., S3, Redshift, Glue, Athena).
Excellent analytical, problem-solving, and critical thinking skills.
Strong communication and presentation skills, with the ability to convey complex technical findings to non-technical stakeholders.
Ability to work independently and collaboratively in a fast-paced, evolving environment.

Preferred Qualifications:
Experience with other generative AI frameworks or platforms (e.g., OpenAI, Google Cloud AI).
Familiarity with data warehousing concepts and ETL/ELT processes.
Knowledge of big data technologies (e.g., Spark, Hadoop).
Experience with MLOps practices for deploying and managing AI/ML models.
Learning about building AI agents with Bedrock and Knowledge Bases will help you understand how these tools are reshaping data analysis and customer service.
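As a concrete taste of the Bedrock-focused work above, a minimal boto3 sketch that sends an analysis prompt to a Claude foundation model on Amazon Bedrock; the region and model ID are assumptions to adjust per account and model access.

```python
# Minimal Amazon Bedrock invocation sketch; region and model ID are assumptions.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

def summarize(text: str) -> str:
    """Ask a Claude model on Bedrock to summarize a small data extract."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [
            {"role": "user",
             "content": f"Summarize the key trends in this dataset:\n{text}"}
        ],
    }
    resp = client.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model ID
        body=json.dumps(body),
    )
    payload = json.loads(resp["body"].read())
    return payload["content"][0]["text"]

if __name__ == "__main__":
    print(summarize("region,revenue\nNorth,120\nSouth,95"))
```

In practice the prompt template, not the plumbing, is where most of the analytical leverage sits, which is why the role emphasizes prompt design.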

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

noida, uttar pradesh

On-site

As a Professional in Data Architecture at Fiserv, you will have the opportunity to showcase your expertise in ETL and BI tools such as SSIS, SSRS, Power BI, and more. You will be an individual contributor on the technical front, which requires excellent communication skills and the flexibility to travel onsite for short-term assignments when necessary. A strong background in ETL development spanning 3-5 years, including hands-on experience in migration or data warehousing projects, is essential. To excel in this role, you should possess a solid understanding of database fundamentals; proficiency in writing SQL commands, queries, and stored procedures (a short sketch follows this listing); and familiarity with ETL tools like SSIS and Informatica and with data warehousing concepts. Your ability to write macros and handle clients effectively, preferably with onsite experience, will be a valuable asset in this position. If you are considering a career at Fiserv, we encourage you to apply using your legal name and complete the step-by-step profile while attaching your resume. Our dedication to Diversity and Inclusion is a core value that we uphold in our workplace. We want to caution against fraudulent job postings that are not associated with Fiserv. Please be vigilant and avoid providing any personal information or financial details to unauthorized sources claiming to represent Fiserv. We do not accept resumes from agencies outside of our existing agreements, and any legitimate communication from Fiserv will originate from a verified Fiserv email address.
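To illustrate the SQL and stored-procedure proficiency this role calls for, a small Python sketch that invokes a SQL Server stored procedure via pyodbc; the connection string, server, and procedure name are hypothetical placeholders, not Fiserv systems.

```python
# Sketch of calling a SQL Server stored procedure from Python via pyodbc;
# the server, database, and procedure name (dbo.usp_LoadDailyStaging) are hypothetical.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=etl-sql.example.internal;DATABASE=Staging;"
    "Trusted_Connection=yes;TrustServerCertificate=yes;"
)

def run_staging_load(batch_date: str) -> None:
    """Kick off a batch load; the procedure keeps transform logic close to the data."""
    with pyodbc.connect(CONN_STR) as conn:
        cur = conn.cursor()
        cur.execute("EXEC dbo.usp_LoadDailyStaging @BatchDate = ?", batch_date)
        conn.commit()

if __name__ == "__main__":
    run_staging_load("2025-08-01")
```

The same call shape is what an SSIS Execute SQL Task would wrap in a packaged ETL flow.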

Posted 4 days ago

Apply

6.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

On-site

Job Description
We're looking for a Data Engineer to join a fast-growing US D2C subscription brand, a portfolio company (PortCo) of our client. This is a hands-on role ideal for someone who thrives in a lean, high-performance environment and wants to build data infrastructure that directly drives business decisions.
Experience: 4-6 years in data engineering
Industry: D2C / E-commerce / Subscription businesses

Key Responsibilities
Manage and optimize the Snowflake data warehouse
Build and maintain Fivetran pipelines and API integrations
Design and manage ETL/ELT workflows
Deliver clean, flat datasets for use in BI tools like Sigma (a sketch follows this listing)
Create scalable schemas and document data processes

Tech & Tools
SQL, Python
Snowflake, Fivetran
dbt or similar frameworks
Sigma, Looker, Mode (analytics/visualization experience preferred)
Strong understanding of data cost optimization
Experience working in lean, agile teams
(ref:hirist.tech)
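To illustrate the "clean, flat datasets for BI" deliverable named above, a minimal sketch using the Snowflake Python connector; the account locator, credentials, and table names are placeholders, and in a dbt setup the same SELECT would live in a model file instead.

```python
# Minimal Snowflake flattening sketch; connection parameters and tables are placeholders.
import snowflake.connector  # pip install snowflake-connector-python

FLATTEN_SQL = """
CREATE OR REPLACE TABLE analytics.subscriptions_flat AS
SELECT s.subscription_id,
       c.email,
       s.plan_name,
       s.started_at::date AS start_date,
       s.mrr_usd
FROM raw.subscriptions s
JOIN raw.customers c ON c.customer_id = s.customer_id
WHERE s.status = 'active'
"""

def build_flat_table() -> None:
    """Join Fivetran-landed raw tables into one flat table a BI tool can read."""
    conn = snowflake.connector.connect(
        account="xy12345.us-east-1",  # placeholder account locator
        user="ETL_USER",
        password="***",               # use a secrets manager in practice
        warehouse="TRANSFORM_WH",
        database="PROD",
    )
    try:
        conn.cursor().execute(FLATTEN_SQL)
    finally:
        conn.close()

if __name__ == "__main__":
    build_flat_table()
```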

Posted 5 days ago

Apply

7.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

On-site

Job Overview
We are seeking a highly skilled and experienced Lead Data Engineer (AWS) to spearhead the design, development, and optimization of our cloud-based data infrastructure. As a technical leader, you will drive scalable data solutions using AWS services and modern data engineering tools, ensuring robust data pipelines and architectures for real-time and batch data processing. The ideal candidate is a hands-on technologist with a deep understanding of distributed data systems, cloud-native data services, and team leadership in Agile environments.

Responsibilities:
Design, build, and maintain scalable, fault-tolerant, and secure data pipelines using AWS-native services (e.g., Glue, EMR, Lambda, S3, Redshift, Athena, Kinesis); a Lambda sketch follows this listing.
Lead end-to-end implementation of data architecture strategies including ingestion, storage, transformation, and data governance.
Collaborate with data scientists, analysts, and application developers to understand data requirements and deliver optimal solutions.
Ensure best practices for data quality, data cataloging, lineage tracking, and metadata management using tools like AWS Glue Data Catalog or Apache Atlas.
Optimize data pipelines for performance, scalability, and cost-efficiency across structured and unstructured data sources.
Mentor and lead a team of data engineers, providing technical guidance, code reviews, and architecture recommendations.
Implement data modeling techniques (OLTP/OLAP), partitioning strategies, and data warehousing best practices.
Maintain CI/CD pipelines for data infrastructure using tools such as AWS CodePipeline and Git.
Monitor production systems and lead incident response and root cause analysis for data infrastructure issues.
Drive innovation by evaluating emerging technologies and proposing improvements to the existing data platform.

Skills & Qualifications:
Minimum 7 years of experience in data engineering, with at least 3+ years in a lead or senior engineering role.
Strong hands-on experience with AWS data services: S3, Redshift, Glue, Lambda, EMR, Athena, Kinesis, RDS, DynamoDB.
Advanced proficiency in Python/Scala/Java for ETL development and data transformation logic.
Deep understanding of distributed data processing frameworks (e.g., Apache Spark, Hadoop).
Solid grasp of SQL and experience with performance tuning in large-scale environments.
Experience implementing data lakes, lakehouse architectures, and data warehousing solutions on the cloud.
Knowledge of streaming data pipelines using Kafka, Kinesis, or AWS MSK.
Proficiency with infrastructure-as-code (IaC) using Terraform or AWS CloudFormation.
Experience with DevOps practices and tools such as Docker, Git, Jenkins, and monitoring tools (CloudWatch, Prometheus, Grafana).
Expertise in data governance, security, and compliance in cloud environments.
(ref:hirist.tech)
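For a flavour of one pattern this role covers (a Lambda handler draining a Kinesis stream into S3), here is a minimal sketch; the bucket name and key prefix are placeholders.

```python
# Sketch of a Kinesis -> S3 Lambda handler; bucket and key prefix are placeholders.
import base64
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-data-lake-raw"  # placeholder bucket

def handler(event, context):
    """Decode Kinesis records and land them as JSON Lines in S3."""
    lines = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        lines.append(json.loads(payload))
    if not lines:
        return {"written": 0}
    key = f"events/{context.aws_request_id}.jsonl"  # unique per invocation
    body = "\n".join(json.dumps(line) for line in lines)
    s3.put_object(Bucket=BUCKET, Key=key, Body=body.encode("utf-8"))
    return {"written": len(lines)}
```

At higher volumes the same shape is usually replaced by Kinesis Data Firehose or a Glue streaming job, with Lambda reserved for light per-record transforms.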

Posted 5 days ago

Apply