
259 Data Pipelines Jobs

Set up a Job Alert
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

6.0 - 8.0 years

6 - 8 Lacs

Kolkata, West Bengal, India

On-site

Job Summary: 6+ years of experience. Work and collaborate with data science and engineering teams to deploy and scale models and algorithms.
- Operationalize complex machine learning models into production, including end-to-end deployment.
- Understand standard machine learning algorithms (regression, classification) and natural language processing concepts (sentiment generation, topic modeling, TF-IDF).
- Working knowledge of standard ML packages such as scikit-learn, vaderSentiment, pandas, and PySpark.
- Design, develop, and maintain adaptable data pipelines to maintain use-case-specific data.
- Integrate ML use cases into business pipelines and work closely with upstream and downstream teams to ensure a smooth handshake of information.
- Develop and maintain pipelines to generate and publish model performance metrics that Model Owners can use for Model Risk Oversight's model review cadence.
- Support the operationalized models and develop runbooks for maintenance.
- Flexibility to work from the ODC/office all days of the week.
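Purely as an editorial illustration of the NLP stack this listing names (scikit-learn's TF-IDF and VADER sentiment scoring) — the sample texts are hypothetical, and this is a sketch rather than the employer's actual pipeline:

```python
# Minimal sketch: TF-IDF features plus VADER sentiment scores.
# The documents below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

docs = [
    "The quarterly results beat expectations and the outlook is strong.",
    "Customers reported repeated outages and slow support responses.",
]

# TF-IDF turns raw text into weighted term vectors for downstream models.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf_matrix = vectorizer.fit_transform(docs)
print(tfidf_matrix.shape)  # (n_documents, n_terms)

# VADER assigns rule-based sentiment scores per document.
analyzer = SentimentIntensityAnalyzer()
for doc in docs:
    print(analyzer.polarity_scores(doc)["compound"])  # -1 (negative) to +1
```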

Posted 19 hours ago

Apply

12.0 - 16.0 years

0 Lacs

Karnataka

On-site

Join the Agentforce team in AI Cloud at Salesforce and make a real impact with your software designs and code! This position requires technical skills, outstanding analytical and influencing skills, and extraordinary business insight. It is a multi-functional role that requires building alignment and communication with several engineering organizations. We work in a highly collaborative environment, and you will partner with a highly cross-functional team of data scientists, software engineers, machine learning engineers, UX experts, and product managers to build upon Agentforce, our cutting-edge new AI framework. We value execution, clear communication, feedback, and making learning fun.

Your impact - you will:
- Architect, design, implement, test, and deliver highly scalable AI solutions: agents, AI copilots/assistants, chatbots, AI planners, and RAG solutions.
- Be accountable for defining and driving software architecture and enterprise capabilities (scalability, fault tolerance, extensibility, maintainability, etc.).
- Independently design sophisticated software systems for high-end solutions, while working in a consultative fashion with other senior engineers and architects in AI Cloud and across the company.
- Determine overall architectural principles, frameworks, and standards to craft vision and roadmaps.
- Analyze and provide feedback on product strategy and technical feasibility.
- Drive long-term design strategies that span multiple sophisticated projects, and deliver technical reports and performance presentations to customers and at industry events.
- Actively communicate with, encourage, and motivate all levels of staff.
- Be a subject matter expert for multiple products, while writing code and working closely with other developers, PM, and UX to ensure features are delivered to meet business and quality requirements.
- Troubleshoot complex production issues and interface with support and customers as needed.

Required skills:
- 12+ years of experience building highly scalable Software-as-a-Service applications/platforms.
- Experience building technical architectures that address complex performance issues.
- Thrive in dynamic environments, working on cutting-edge projects that often come with ambiguity; an innovation/startup mindset and the ability to adapt.
- Deep knowledge of object-oriented programming and experience with at least one object-oriented programming language, preferably Java.
- Proven ability to mentor team members to support their understanding and growth of software engineering architecture concepts and aid in their technical development.
- High proficiency in at least one high-level programming language and web framework (NodeJS, Express, Hapi, etc.).
- Proven understanding of web technologies such as JavaScript, CSS, HTML5, XML, JSON, and/or Ajax.
- Data model design, database technologies (RDBMS and NoSQL), and languages such as SQL and PL/SQL.
- Experience delivering, or partnering with teams that ship, AI products at high scale.
- Experience in automated testing, including unit and functional testing using Java, JUnit, JSUnit, and Selenium.
- Demonstrated ability to drive long-term design strategies that span multiple complex projects.
- Experience delivering technical reports and presentations to customers and at industry events.
- Demonstrated track record of cultivating strong working relationships and driving collaboration across multiple technical and business teams to resolve critical issues.
- Experience with the full software lifecycle in highly agile and ambiguous environments.
- Excellent interpersonal and communication skills.

Preferred skills:
- Solid experience in API development, API lifecycle management, and/or client SDK development.
- Experience with machine learning or cloud technology platforms such as AWS SageMaker, Terraform, Spinnaker, EKS, and GKE.
- Experience with AI/ML and data science, including predictive and generative AI.
- Experience with data engineering, data pipelines, or distributed systems.
- Experience with continuous integration (CI), continuous deployment (CD), and service ownership.
- Familiarity with Salesforce APIs and technologies.
- Ability to support and resolve production customer escalations with excellent debugging and problem-solving skills.

Posted 1 day ago

Apply

15.0 - 19.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Technical Lead / Data Architect, you will play a crucial role in our organization by leveraging your expertise in modern data architectures, cloud platforms, and analytics technologies. In this leadership position, you will be responsible for designing robust data solutions, guiding engineering teams, and ensuring successful project execution in collaboration with the project manager.

Your key responsibilities will include architecting and designing end-to-end data solutions across multi-cloud environments such as AWS, Azure, and GCP. You will lead and mentor a team of data engineers, BI developers, and analysts to deliver on complex project deliverables. Additionally, you will define and enforce best practices in data engineering, data warehousing, and business intelligence. You will design scalable data pipelines using tools like Snowflake, dbt, Apache Spark, and Airflow, and act as a technical liaison with clients, providing strategic recommendations and maintaining strong relationships.

To be successful in this role, you should have at least 15 years of experience in IT with a focus on data architecture, engineering, and cloud-based analytics. You must have expertise in multi-cloud environments and cloud-native technologies, along with deep knowledge of Snowflake, data warehousing, ETL/ELT pipelines, and BI platforms. Strong leadership and mentoring skills are essential, as well as excellent communication and interpersonal abilities to engage with both technical and non-technical stakeholders.

In addition to the required qualifications, certifications in major cloud platforms and experience in enterprise data governance, security, and compliance are preferred. Familiarity with AI/ML pipeline integration would be a plus.

We offer a collaborative work environment, opportunities to work with cutting-edge technologies and global clients, competitive salary and benefits, and continuous learning and professional development opportunities. Join us in driving innovation and excellence in data architecture and analytics.

Posted 1 day ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Our client is looking for a Data and Backend Engineer who's excited about building scalable, data-driven products and thrives in a high-ownership, fast-paced environment. If you enjoy working across the modern data stack - from ingestion to delivery - writing clean, efficient code, and collaborating to solve real-world business problems, this role is for you.

Responsibilities:
- Own and enhance the full lifecycle of our data pipelines - from ingestion to transformation to delivery.
- Design, implement, and scale robust data infrastructure in a modern cloud environment.
- Write high-quality, production-grade code using Python or similar scripting languages.
- Participate in technical architecture reviews and make thoughtful trade-offs.
- Work closely with product and engineering teams to develop data-powered solutions.
- (Bonus) Build and maintain web scraping systems for custom data requirements.

Qualifications and requirements:
- 3+ years of hands-on experience with data-intensive product development.
- Strong SQL skills, including experience with query optimization.
- Proficiency in at least one scripting language (e.g., Python, Go).
- Experience working with cloud platforms like AWS, GCP, or Azure.
- Proven track record of building and maintaining data pipelines end-to-end.
- Familiarity with scraping frameworks and tools.
- Self-starter with a proactive, ownership-driven mindset.
- Strong communication skills and the ability to work cross-functionally.
- Comfortable operating in a high-growth, fast-evolving startup environment.
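As a rough, hypothetical sketch of the ingestion-to-delivery lifecycle this listing describes (pandas and SQLite keep the example self-contained; all file, table, and column names are invented):

```python
# Toy end-to-end pipeline: ingest a CSV, transform it, deliver to a SQL table.
import sqlite3
import pandas as pd

# Write a tiny sample source file so the sketch runs on its own.
pd.DataFrame({
    "event_time": ["2024-01-01 10:00", "2024-01-01 11:30", "2024-01-02 09:15"],
    "user_id": [1, 2, 1],
}).to_csv("events.csv", index=False)

def run_pipeline(csv_path: str, db_path: str) -> None:
    # Ingest: read raw events from the source file.
    raw = pd.read_csv(csv_path, parse_dates=["event_time"])

    # Transform: drop incomplete rows and build a daily aggregate.
    raw = raw.dropna(subset=["user_id"])
    daily = (
        raw.assign(day=raw["event_time"].dt.date)
           .groupby("day", as_index=False)
           .agg(events=("user_id", "count"))
    )

    # Deliver: load the aggregate into a warehouse-style table.
    with sqlite3.connect(db_path) as conn:
        daily.to_sql("daily_events", conn, if_exists="replace", index=False)

run_pipeline("events.csv", "warehouse.db")
```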

Posted 1 day ago

Apply

6.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Hello, greetings from ZettaMine!

Hiring for: Data Engineer
Experience: 6 to 10 years
Location: Bangalore (immediate joiners only)

Skills required:
- Python, SQL, ETL
- Azure Data Services (ADF, Synapse), Databricks
- Data pipelines, ingestion (batch and streaming), data migration
- Strong in data modelling, transformation, and performance tuning

Interested candidates can share an updated CV at [HIDDEN TEXT].

Thanks & Regards,
Afreen

Posted 1 day ago

Apply

8.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role: DevOps - OpenAI - Azure
Location: Bangalore, Chennai, Pune, Hyderabad
Work mode: Hybrid
Experience: 9+ years

Required qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related technical field (or equivalent experience).
- 8+ years of experience in DevOps, SRE, or Cloud Engineering roles.
- Strong expertise with Azure cloud services and automation tools.
- Proficient in Infrastructure as Code (Terraform, Bicep, ARM).
- Deep understanding of CI/CD tools and methodologies.
- Experience managing data pipelines and distributed systems in production environments.
- Familiarity with AI/ML workflows, including vector databases and LLM APIs.
- Proficient in scripting languages such as Python, Bash, or PowerShell.

If you are interested, please share an updated resume to [HIDDEN TEXT].

Posted 1 day ago

Apply

2.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description
- Bachelor's degree in computer science or data analytics.
- 2+ years of professional software development experience.
- Comfortable in a collaborative, agile development environment.
- Proven experience in using data to drive insights and influence business decisions.
- Strong expertise in Python, particularly for solving data analytics-related challenges.
- Hands-on experience with data visualization tools and techniques (e.g., Matplotlib, Tableau, Power BI, or similar).
- Solid understanding of data pipelines, analysis workflows, and process automation.
- Strong problem-solving skills with an ability to work in ambiguous, fast-paced environments.

Responsibilities
- Design, develop, and maintain data analytics tooling to monitor, analyze, and improve system performance and stability.
- Use data to extract meaningful insights and translate them into actionable business decisions.
- Automate processes and workflows to enhance performance and customer experience.
- Collaborate with cross-functional teams (engineering, product, operations) to identify and address critical issues using data.
- Create intuitive and impactful data visualizations that simplify complex technical problems.
- Continuously evolve analytics frameworks to support real-time monitoring and predictive capabilities.

Qualifications
Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing [HIDDEN TEXT] or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
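For illustration only (not Oracle's actual tooling), a small Python sketch of the monitoring-analytics pattern the listing describes: summarizing a synthetic latency metric with pandas and charting it with Matplotlib:

```python
# Toy monitoring analysis: smooth a synthetic latency series and plot it.
import pandas as pd
import matplotlib.pyplot as plt

metrics = pd.DataFrame({
    "minute": pd.date_range("2024-01-01", periods=120, freq="min"),
    "latency_ms": (pd.Series(range(120)) % 17) * 3 + 50,  # synthetic signal
})

# A rolling mean smooths noise so stability regressions stand out.
metrics["latency_smooth"] = metrics["latency_ms"].rolling(window=15).mean()

metrics.plot(x="minute", y=["latency_ms", "latency_smooth"])
plt.title("Service latency with 15-minute rolling mean")
plt.ylabel("latency (ms)")
plt.tight_layout()
plt.savefig("latency_trend.png")
```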

Posted 1 day ago

Apply

2.0 - 4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description
- Bachelor's degree in computer science or data analytics.
- 2+ years of professional software development experience.
- Comfortable in a collaborative, agile development environment.
- Proven experience in using data to drive insights and influence business decisions.
- Strong expertise in Python, particularly for solving data analytics-related challenges.
- Hands-on experience with data visualization tools and techniques (e.g., Matplotlib, Tableau, Power BI, or similar).
- Solid understanding of data pipelines, analysis workflows, and process automation.
- Strong problem-solving skills with an ability to work in ambiguous, fast-paced environments.

Responsibilities
- Design, develop, and maintain data analytics tooling to monitor, analyze, and improve system performance and stability.
- Use data to extract meaningful insights and translate them into actionable business decisions.
- Automate processes and workflows to enhance performance and customer experience.
- Collaborate with cross-functional teams (engineering, product, operations) to identify and address critical issues using data.
- Create intuitive and impactful data visualizations that simplify complex technical problems.
- Continuously evolve analytics frameworks to support real-time monitoring and predictive capabilities.

Qualifications
Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing [HIDDEN TEXT] or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 1 day ago

Apply

2.0 - 6.0 years

2 - 6 Lacs

Bengaluru, Karnataka, India

On-site

Job description
You will partner with teammates to create complex data processing pipelines in order to solve our clients' most complex challenges. You will pair to write clean, iterative code based on TDD, and leverage various continuous delivery practices to deploy, support, and operate data pipelines. You will advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available, develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions, create data models and speak to the trade-offs of different modeling approaches, and seamlessly incorporate data quality into your day-to-day work as well as into the delivery process. You will also assure effective collaboration between Thoughtworks and the client's teams, encouraging open communication and advocating for shared outcomes.

Technical skills:
- You have a good understanding of data modelling and experience with data engineering tools and platforms such as Spark (Scala) and Hadoop.
- You have built large-scale data pipelines and data-centric applications using distributed storage platforms such as HDFS, S3, and NoSQL databases (HBase, Cassandra, etc.), and distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting.
- Hands-on experience in MapR, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.).
- You are comfortable taking data-driven approaches and applying data security strategy to solve business problems.
- Working with data excites you: you can build and operate data pipelines and maintain data storage, all within distributed systems.
- You're genuinely excited about data infrastructure and operations, with familiarity working in cloud environments.

Professional skills:
- You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives.
- An interest in coaching and sharing your experience and knowledge with teammates.
- You enjoy influencing others and always advocate for technical excellence while being open to change when needed.
- Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs, and more.

Must have:
- Data catalog: Collibra Catalog / Ab Initio / Apache Atlas
- Data quality framework experience: Great Expectations / Collibra DQ / Ab Initio
- Data lineage: Apache Atlas / Collibra
- Data processing: batch / real-time

Good to have:
- Data security / encryption / tokenization: Apache Ranger
- Data pipelines orchestration
- Data modelling
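Illustratively, a minimal PySpark sketch of a batch pipeline with an inline data-quality gate of the kind this listing calls for. The rows, threshold, and output path are hypothetical, and a real project would typically express the checks through a framework such as Great Expectations rather than by hand:

```python
# Toy Spark batch job with a simple data-quality gate before publishing.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Inline sample rows keep the sketch self-contained; a real job would read
# from durable storage (e.g. spark.read.parquet on a data-lake path).
orders = spark.createDataFrame(
    [("o1", "19.99"), ("o2", None), ("o3", "5.00")],
    ["order_id", "amount"],
)

cleaned = (
    orders.withColumn("amount", F.col("amount").cast("double"))
          .filter(F.col("order_id").isNotNull())
)

# Quality gate: refuse to publish if too many rows lost their amount.
total = cleaned.count()
null_ratio = cleaned.filter(F.col("amount").isNull()).count() / max(total, 1)
if null_ratio > 0.5:
    raise ValueError(f"amount null ratio {null_ratio:.0%} exceeds threshold")

cleaned.write.mode("overwrite").parquet("curated/orders")
spark.stop()
```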

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

As a Data Scientist, you will be responsible for analyzing complex data using statistical and machine learning models to derive actionable insights. You will use Python for data analysis and visualization, and work with various technologies such as APIs, Linux, databases, big data technologies, and cloud services. Additionally, you will develop innovative solutions for natural language processing and generative modeling tasks, collaborating with cross-functional teams to understand business requirements and translate them into data science solutions. You will work in an Agile framework, participating in sprint planning, daily stand-ups, and retrospectives.

Furthermore, you will research, develop, and analyze computer vision algorithms in areas related to object detection, tracking, product identification and verification, and scene understanding, ensuring model robustness, generalization, accuracy, testability, and efficiency. You will also be responsible for writing product or system development code, designing and maintaining data pipelines and workflows within Azure Databricks for optimal performance and scalability, and communicating findings and insights effectively to stakeholders through reports and visualizations.

To qualify for this role, you should have a Master's degree in Data Science, Statistics, Computer Science, or a related field, and over 5 years of proven experience in developing machine learning models, particularly for time series data within a financial context. Advanced programming skills in Python or R are required, with extensive experience in libraries such as Pandas, NumPy, and Scikit-learn. Additionally, you should have comprehensive knowledge of AI and LLM technologies, with a track record of developing applications and models. Proficiency in data visualization tools like Tableau, Power BI, or similar platforms is essential, as are exceptional analytical and problem-solving abilities coupled with meticulous attention to detail. Superior communication skills are also required to enable the clear and concise presentation of complex findings.

Extensive experience in Azure Databricks for data processing, model training, and deployment is preferred, along with proficiency in Azure Data Lake and Azure SQL Database for data storage and management. Experience with Azure Machine Learning for model deployment and monitoring, as well as an in-depth understanding of Azure services and tools for data integration and orchestration, will be beneficial for this position.
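As a compact illustration of the time-series modelling stack this listing names (pandas, NumPy, scikit-learn) — the series is synthetic and the setup is a sketch, not the employer's method:

```python
# Toy time-series regression: lag features, chronological split, random forest.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
y = pd.Series(np.sin(np.linspace(0, 20, 300)) + rng.normal(0, 0.1, 300))

# Lag features let a tabular model learn temporal structure.
frame = pd.DataFrame({f"lag_{k}": y.shift(k) for k in (1, 2, 3, 7)})
frame["target"] = y
frame = frame.dropna()

split = int(len(frame) * 0.8)  # chronological split: never shuffle time series
train, test = frame.iloc[:split], frame.iloc[split:]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(train.drop(columns="target"), train["target"])
preds = model.predict(test.drop(columns="target"))
print("MAE:", mean_absolute_error(test["target"], preds))
```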

Posted 1 day ago

Apply

12.0 - 16.0 years

0 Lacs

Karnataka

On-site

Job Summary: As a Data Engineer at WNS (Holdings) Limited, your primary responsibility will be handling complex data tasks with a focus on data transformation and querying. You must possess strong proficiency in advanced SQL techniques and a deep understanding of database structures. Your role will involve extracting and analyzing raw data to support the Reporting and Data Science team, providing both qualitative and quantitative insights to meet the business requirements.

Responsibilities:
- Design, develop, and maintain data transformation processes using SQL.
- Manage complex data tasks related to data processing and querying.
- Collaborate with the Data team to comprehend data requirements and efficiently transform data in SQL environments.
- Construct and optimize data pipelines to ensure smooth data flow and transformation.
- Uphold data quality and integrity throughout the data transformation process.
- Address and resolve data issues promptly as they arise.
- Document data processes, workflows, and transformation logic.
- Engage with clients to identify reporting needs and leverage visualization experience to propose optimal solutions.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 2 years of experience in a Data Engineering role.
- Profound proficiency in SQL, including advanced techniques for data manipulation and querying.
- Hands-on experience with Power BI data models and DAX commands.
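For illustration, one flavour of the "advanced SQL" transformation work described above is window functions; the hypothetical sketch below computes a per-customer running total, shown through Python's sqlite3 module so it is self-contained (window functions require SQLite 3.25+):

```python
# Toy advanced-SQL transformation: running total per customer via a window fn.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (customer TEXT, paid_on TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO payments VALUES (?, ?, ?)",
    [("a", "2024-01-01", 10), ("a", "2024-01-03", 5), ("b", "2024-01-02", 7)],
)

rows = conn.execute(
    """
    SELECT customer, paid_on, amount,
           SUM(amount) OVER (
               PARTITION BY customer ORDER BY paid_on
           ) AS running_total
    FROM payments
    ORDER BY customer, paid_on
    """
).fetchall()
for row in rows:
    print(row)  # e.g. ('a', '2024-01-03', 5.0, 15.0)
conn.close()
```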

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As a dynamic global technology company, Schaeffler owes its success to its entrepreneurial spirit and long history of private ownership. Partnering with major automobile manufacturers, as well as key players in the aerospace and industrial sectors, we offer numerous development opportunities globally.

Your key responsibilities include developing data pipelines and utilizing methods and tools to collect, store, process, and analyze complex data sets for assigned operations or functions. You will design, govern, build, and operate solutions for large-scale data architectures and applications across businesses and functions. Additionally, you will manage and work hands-on with big data tools and frameworks, and implement ETL tools and processes, data virtualization, and federation services. Engineering data integration pipelines and reusable data services using cross-functional data models, semantic technologies, and data integration solutions is also part of your role. You will define, implement, and apply data governance policies for all data flows of data architectures, focusing on the digital platform and data lake. Furthermore, you will define and implement policies for data ingestion, retention, lineage, access, data service API management, and usage in collaboration with data management and IT functions.

To qualify for this position, you should hold a graduate degree in Computer Science, Applied Computer Science, or Software Engineering with 3 to 5 years of relevant experience.

Emphasizing respect and valuing diverse ideas and perspectives among our global workforce is essential to us. By fostering creativity through appreciating differences, we drive innovation and contribute to sustainable value creation for our stakeholders and society as a whole. Together, we are shaping the future with innovation, offering exciting assignments and outstanding development opportunities. We eagerly anticipate your application.

For technical inquiries, please contact: technical-recruiting-support-AP@schaeffler.com. For more information and to apply, visit www.schaeffler.com/careers.

Posted 1 day ago

Apply

4.0 - 8.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

We are looking for a highly skilled and motivated Senior Technical Analyst to become a valuable part of our team. In this role, you will need to possess a combination of business acumen, data expertise, and technical proficiency to contribute to the development of scalable data-driven products and solutions. The ideal candidate will act as a bridge between business stakeholders and the technical team, ensuring the delivery of robust, scalable, and actionable data solutions.

Your key responsibilities will include analyzing and critically evaluating client-provided technical and functional requirements, collaborating with stakeholders to identify gaps and areas needing clarification, and aligning business objectives with data capabilities. Additionally, you will be expected to contribute to defining and prioritizing product features in collaboration with technical architects and cross-functional teams, conduct data validation and exploratory analysis, and develop detailed user stories and acceptance criteria to guide development teams. You will also be responsible for conducting user acceptance testing, ensuring solutions meet performance and security requirements, and serving as the primary interface between clients, vendors, and internal teams throughout the project lifecycle. Furthermore, you will guide cross-functional teams, collaborate with onsite team members, and drive accountability to ensure deliverables meet quality standards and timelines.

To be successful in this role, you should have a Bachelor's degree in computer science, information technology, business administration, or a related field, with a Master's degree preferred. You should also have 4-5 years of experience managing technology-driven projects, with at least 3 years in a Technical Business Analyst or equivalent role. Strong experience in SQL, data modeling, and data analysis, as well as hands-on knowledge of cloud platforms with a focus on data engineering solutions, is essential. Your familiarity with APIs, data pipelines, workflow orchestration, and automation, along with a deep understanding of Agile/Scrum methodologies and experience with Agile tools, will be beneficial. Exceptional problem-solving, critical-thinking, decision-making, communication, presentation, and stakeholder management abilities are also key skills required for this role.

This is a full-time permanent position located at DGS India - Pune - Kharadi EON Free Zone under the brand Merkle. If you are looking for a challenging role where you can contribute to the development of innovative data-driven solutions, we would love to hear from you.

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Pricing Revenue Growth Consultant, your primary role will be to advise on building a pricing and promotion tool for a Consumer Packaged Goods (CPG) client. This tool will encompass pricing strategies, trade promotions, and revenue growth initiatives. You will be responsible for developing analytics and machine learning models to analyze price elasticity, promotion effectiveness, and trade promotion optimization. Collaboration with the CPG business, marketing, data scientists, and other teams will be essential for the successful delivery of the project and tool.

Your business domain skills will be crucial in this role, including expertise in Trade Promotion Management (TPM), Trade Promotion Optimization (TPO), promotion depth and frequency forecasting, price pack architecture, competitive price tracking, revenue growth management, and financial modeling. Additionally, you will need proficiency in AI and machine learning for pricing, and dynamic pricing implementation.

Key responsibilities:
- Utilize consulting skills for hypothesis-driven problem solving, go-to-market pricing, and revenue growth execution.
- Conduct advisory presentations and data storytelling.
- Provide project leadership and execution.

Technical requirements:
- Proficiency in programming languages such as Python and R for data manipulation and analysis.
- Expertise in machine learning algorithms and statistical modeling techniques.
- Familiarity with data warehousing, data pipelines, and data visualization tools like Tableau or Power BI.
- Experience in cloud platforms like ADF, Databricks, Azure, and their AI services.

Additional responsibilities:
- Work collaboratively with cross-functional teams across sales, marketing, and product development.
- Manage stakeholders and lead teams.
- Thrive in a fast-paced environment focused on delivering timely insights to support business decisions.
- Demonstrate excellent problem-solving skills and the ability to address complex technical challenges.
- Communicate effectively with cross-functional teams and stakeholders.
- Manage multiple projects simultaneously and prioritize tasks based on business impact.

Qualifications:
- A degree in Data Science or Computer Science with a specialization in data science.
- A Master's in Business Administration and Analytics is preferred.

Preferred skills:
- Experience in technology, big data, and text analytics.
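As a hypothetical sketch of one standard technique behind the price-elasticity analytics named above — a log-log regression whose slope estimates elasticity — with entirely synthetic demand data:

```python
# Toy price-elasticity estimate: slope of log(units) on log(price).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
price = rng.uniform(2.0, 6.0, 200)
# Synthetic demand generated with a true elasticity of about -1.5.
units = np.exp(8 - 1.5 * np.log(price) + rng.normal(0, 0.1, 200))

# In a log-log demand model, the regression coefficient is the elasticity.
X = np.log(price).reshape(-1, 1)
model = LinearRegression().fit(X, np.log(units))
print(f"estimated elasticity: {model.coef_[0]:.2f}")  # close to -1.5
```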

Posted 2 days ago

Apply

10.0 - 15.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Job Description: We are looking for a highly skilled AI/ML Architect with a strong background in data engineering and a demonstrated ability to apply design thinking principles to complex business problems. As the ideal candidate, you will have a pivotal role in shaping comprehensive AI/ML solutions, spanning from data architecture to model deployment. Your focus will be on ensuring that the design is user-centric, scalable, and in alignment with the business objectives.

Key responsibilities:
- Take the lead in architecting and designing AI/ML solutions across various business domains.
- Collaborate closely with stakeholders to identify use cases and transform them into scalable, production-ready ML architectures.
- Employ design thinking methodologies to foster innovative and user-centric solution designs.
- Design data pipelines and feature engineering processes for both structured and unstructured data.
- Oversee the entire ML lifecycle, encompassing data preprocessing, model training, evaluation, deployment, and monitoring.
- Uphold best practices in MLOps, including CI/CD for ML, model governance, and retraining strategies.
- Work collaboratively with data scientists, engineers, and product teams to ensure that the architecture aligns with business goals.
- Provide mentorship and guidance to engineering and data science teams regarding solution design, performance optimization, and system integration.

Experience: 10 to 15 years
Job Reference Number: 13029

Posted 2 days ago

Apply

6.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

About the Role
We are looking for a highly skilled and experienced Informatica Data Management Cloud (IDMC) Architect / Tech Lead to join our dynamic team at Cittabase. As the IDMC Architect/Tech Lead, your primary responsibility will be to lead the design, implementation, and maintenance of data management solutions on the Informatica Data Management Cloud platform. You will collaborate closely with cross-functional teams to create scalable and efficient data pipelines, ensure data quality and governance, and oversee the successful delivery of data projects. The ideal candidate will demonstrate advanced expertise in Informatica IDMC, possess strong leadership qualities, and have a proven track record of driving data initiatives to success.

Responsibilities:
- Lead the design and implementation of data management solutions using Informatica Data Management Cloud.
- Develop end-to-end data pipelines for data ingestion, transformation, integration, and delivery across various sources and destinations.
- Work with stakeholders to gather requirements, establish data architecture strategies, and translate business needs into technical solutions.
- Provide technical leadership and guidance to a team of developers, ensuring compliance with coding standards, best practices, and project timelines.
- Conduct performance tuning, optimization, and troubleshooting of Informatica IDMC workflows and processes.
- Stay informed about emerging trends and technologies in data management, Informatica platform updates, and industry best practices.
- Serve as a subject matter expert on Informatica Data Management Cloud, engaging in solution architecture discussions, client presentations, and knowledge-sharing sessions.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 8-12 years of experience in IT with specialization in data management (DW / data lake / lakehouse).
- 6-10 years of experience in the Informatica suite of products, such as PowerCenter, Data Engineering, and CDC.
- Profound understanding of RDBMS and cloud database architecture.
- Experience implementing a minimum of two full-lifecycle IDMC projects.
- Strong grasp of data integration patterns and data modeling concepts.
- Hands-on experience with Informatica IDMC configurations, data modeling, and data mappings.
- Demonstrated leadership experience, with the ability to mentor team members and foster collaboration.
- Excellent communication skills, enabling effective interaction with technical and non-technical stakeholders.
- Capability to collaborate with PM/BA to translate requirements into a working model and work with developers to implement the same.
- Preparation and presentation of solution design and architecture documents.
- Knowledge of visualization/BI tools will be an added advantage.

Join us and contribute to our innovative projects by applying now to be part of our dynamic team!

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

We are looking for a highly skilled and experienced Senior Python & ML Engineer with expertise in PySpark, machine learning, and large language models (LLMs). You will play a key role in designing, developing, and implementing scalable data pipelines, machine learning models, and LLM-powered applications. In this role, you will need a solid understanding of Python's ecosystem, distributed computing using PySpark, and practical experience in AI optimization.

Your responsibilities will include designing and maintaining robust data pipelines with PySpark, optimizing PySpark jobs for efficiency on large datasets, and ensuring data integrity throughout the pipeline. You will also develop, train, and deploy machine learning models using key ML libraries such as scikit-learn, TensorFlow, and PyTorch. Tasks will include feature engineering, model selection, hyperparameter tuning, and integrating ML models into production systems for scalability and reliability.

Additionally, you will research, experiment with, and integrate state-of-the-art large language models (LLMs) into applications. This will involve developing solutions that leverage LLMs for tasks like natural language understanding, text generation, summarization, and question answering. You will also fine-tune pre-trained LLMs for specific business needs and datasets, and explore techniques for prompt engineering, RAG (Retrieval Augmented Generation), and LLM evaluation.

Collaboration is key in this role, as you will work closely with data scientists, engineers, and product managers to understand requirements and translate them into technical solutions. You will mentor junior team members, contribute to best practices for code quality, testing, and deployment, and stay updated on the latest advancements in Python, PySpark, ML, and LLMs. Furthermore, you will be responsible for deploying, monitoring, and maintaining models and applications in production environments using MLOps principles. Troubleshooting and resolving issues related to data pipelines, ML models, and LLM applications will also be part of your responsibilities.

To be successful in this role, you should have a Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. Strong proficiency in Python programming, PySpark, machine learning, and LLMs is essential. Experience with cloud platforms like AWS, Azure, or GCP is preferred, along with strong problem-solving, analytical, communication, and teamwork skills. Nice-to-have skills include familiarity with R and Shiny, streaming data technologies, containerization technologies, MLOps tools, graph databases, and contributions to open-source projects.
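To make the RAG retrieval step mentioned above concrete, a toy Python sketch that ranks passages by TF-IDF cosine similarity — a simple stand-in for the vector store a production system would use; the corpus and query are invented:

```python
# Toy retrieval for RAG: rank documents by cosine similarity to a query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Refunds are processed within five business days.",
    "Premium plans include priority support and SSO.",
    "Passwords can be reset from the account settings page.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

query = "how do I reset my password?"
query_vec = vectorizer.transform([query])

# The top-ranked passages would be injected into the LLM prompt as context.
scores = cosine_similarity(query_vec, doc_vectors).ravel()
for idx in scores.argsort()[::-1][:2]:
    print(f"{scores[idx]:.3f}  {corpus[idx]}")
```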

Posted 2 days ago

Apply

11.0 - 13.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Organization: At CommBank, we never lose sight of the role we play in other people's financial wellbeing. Our focus is to help people and businesses move forward and progress - to make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things.

Job Title: Staff Software Manager (Data)
Location: Bangalore

Business & Team: Our Bankwest Technology division is at the heart of our digital product strategy and is responsible for the management and deployment of technology change across the organisation. Our tech teams work at pace, with autonomy and local decision making, to deploy world-class solutions in pursuit of our business strategy. You'll be joining our Bankwest Tech Team, a critical supporting function that allows teams to deliver rapidly whilst still aligning to a strategic roadmap. As a team of trusted internal consultants, you'll provide the expertise, guidance, and technology governance to delivery teams to plot a pragmatic path between today's deliverables and our longer-term enterprise objectives. You'll provide input into roadmaps, best practice, standards, and methodologies to help ensure high-quality outcomes, and oversee implementation to ensure business objectives are met. Since you bridge the gap between business problems and technology solutions, you'll need to be equally comfortable engaging with senior stakeholders as with engineering teams, and excellent communication skills (in both directions) are a must. Often you'll work across a range of different initiatives, so whilst you're never short of a challenge, the ability to work independently and manage your own time is important.

Impact & contribution: As a Staff Manager you are:
- Empathetic and self-aware. You think and care deeply about how you might interact with your team, stakeholders, and customers.
- A mentor, harbouring a passion to nurture, grow, and influence those around you to think differently and always maintain a growth mindset.
- Innovative. You continually seek to improve the status quo for our customers, inspire your team to do the same, and remain resilient through change. Promoting quality and delivering at pace through the maximisation of automation is one of the key focus areas of the role.
- An owner. You take responsibility for the software design, engineering processes, and quality standards of your work as well as your team members' as you work in a collaborative environment.

Roles & responsibilities: As a Staff Software Manager, you:
- Work with the team to understand our customers' core business objectives and deliver quality data-centric solutions within committed timeframes.
- Contribute to thought leadership, enabling analytics teams to deliver world-class data-centric solutions and analytics by championing sustainable and reusable data assets.
- Design and build group data assets by integrating diverse data from internal and external sources.
- Help to promote best-in-class coding standards and practices to ensure high quality and minimum risk.
- Collaborate and communicate with business and delivery stakeholders, working without supervision.
- Identify and escalate technical debt through simple processes to support effective planning and risk management.

Risk mindset: All Bankwest employees are expected to proactively identify and understand, openly discuss, and act on current and future risks.

- Able to build solutions that are fit for purpose, perform well with large data volumes and complex data transformation rules, and are reliable to operate.
- Understanding of the development and release cycle following change management processes.
- Creative problem solver, with open thinking to generate and support new or better ways of doing things.
- Strong capability and experience with modern engineering practices and techniques.
- Experience with Agile working practices is beneficial.
- Prior experience working in the financial services industry would be highly regarded, but is not essential.

Essential skills:
- 11+ years of experience in a relevant field.
- Continuously improve data products with the best engineering solutions.
- Strategise, design, and implement (hands on) highly reliable and scalable data pipelines and data platforms, with comprehensive test coverage, on AWS Cloud using AWS cloud-native services such as AWS SageMaker and Redshift.
- Build and implement data pipelines in distributed data platforms, including warehouses, databases, data lakes, and cloud lakehouses, to enable data predictions and models, and reporting and visualization analysis via data integration tools and frameworks.
- RDBMS experience on any prominent database is required, with strong SQL expertise.

Required skills: Strategic thinking, external perspective, product delivery; working with distributed teams, continuous delivery and Agile practices, people leadership, building high-performing teams; development, analysis, or testing.

Educational qualification: Bachelor's or Master's degree in engineering in Information Technology.

If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We're keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696.

Advertising End Date: 24/07/2025

Posted 2 days ago

Apply

9.0 - 11.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description
Qualifications:
- Overall 9+ years of IT experience.
- Minimum of 5+ years preferred managing Data Lakehouse environments; specific experience with Azure Databricks, Snowflake, and DBT (nice to have) is a plus.
- Hands-on experience with data warehousing, data lake/lakehouse solutions, data pipelines (ELT/ETL), SQL, Spark/PySpark, and DBT.
- Strong understanding of data modelling, SDLC, Agile, and DevOps principles.
- Bachelor's degree in management/computer information systems, computer science, accounting information systems, or a relevant field.

Knowledge/Skills:
- Tools and technologies: Azure Databricks, Apache Spark, Python, Databricks SQL, Unity Catalog, and Delta Live Tables. Understanding of cluster configuration, and compute and storage layers.
- Expertise with Snowflake architecture, with experience in design, development, and evolution.
- System integration experience, including data extraction, transformation, and quality controls design techniques.
- Familiarity with data science concepts, as well as MDM, business intelligence, and data warehouse design and implementation techniques.
- Extensive experience with the medallion architecture data management framework as well as Unity Catalog.
- Data modeling and information classification expertise at the enterprise level.
- Understanding of metamodels, taxonomies, and ontologies, as well as of the challenges of applying structured techniques (data modeling) to less-structured sources.
- Ability to assess rapidly changing technologies and apply them to business needs.
- Able to translate the information architecture contribution to business outcomes into simple briefings for use by various data-and-analytics-related roles.

About Us
Datavail is a leading provider of data management, application development, analytics, and cloud services, with more than 1,000 professionals helping clients build and manage applications and data via a world-class tech-enabled delivery platform and software solutions across all leading technologies. For more than 17 years, Datavail has worked with thousands of companies spanning different industries and sizes, and is an AWS Advanced Tier Consulting Partner, a Microsoft Solutions Partner for Data & AI and Digital & App Innovation (Azure), an Oracle Partner, and a MySQL Partner.

About The Team
Datavail's Data Management and Analytics practice is made up of experts who provide a variety of data services, including initial consulting and development, designing and building complete data systems, and ongoing support and management of database, data warehouse, data lake, data integration, and virtualization and reporting environments. Datavail's team is comprised of not just excellent BI and analytics consultants, but great people as well. Datavail's data intelligence consultants are experienced, knowledgeable, and certified in the best-in-breed BI and analytics software applications and technologies. We ascertain your business objectives, goals, and requirements, assess your environment, and recommend the tools that best fit your unique situation. Our proven methodology can help your project succeed, regardless of stage. The combination of a proven delivery model and top-notch experience ensures that Datavail will remain the data management experts on demand you desire. Datavail's flexible and client-focused services always add value to your organization.

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

As a Retail Sell Out Consultant, you will collaborate with CPG and FMCG businesses, data engineers, and other teams to ensure successful project delivery and tool implementation. You will need a combination of business domain skills, technical expertise, and consulting skills to excel in this role.

Your responsibilities will include engaging with various stakeholders (both non-technical and technical) on the client side, interpreting problem statements and use cases, and devising feasible solutions. You will be tasked with understanding different types of retail data, designing data models including fact and dimension table structures, and driving data load and refresh strategies. In addition, you will work on designing TradeEdge interface specifications, collaborating with developers for data conversion, preparing calculation logic documents, and actively participating in User Acceptance Testing (UAT). Your proficiency in SQL, Power BI, data warehousing, and data pipelines will be crucial for data manipulation and analysis. Experience with data visualization tools like Tableau or Power BI, as well as cloud platform services, will also be beneficial.

As a Retail Sell Out Consultant, you will be expected to demonstrate strong consulting skills such as advisory, presentation, and data storytelling. You will play a key role in project leadership and execution, working closely with technical architects and TradeEdge and GCP developers throughout the project lifecycle. Your ability to work in an Agile framework and collaborate effectively with cross-functional teams will be essential.

The ideal candidate for this role should hold a degree in engineering with exposure to retail, FMCG, and supply chain management. A deep understanding of the retail domain, including POS sales, inventory management, and related experience, will be highly valued in this position.

In this role, you can expect a collaborative work environment with cross-functional teams, a strong focus on stakeholder management and team handling, and a fast-paced setting aimed at delivering timely insights to support business decisions. Your excellent problem-solving skills, effective communication abilities, and commitment to addressing complex technical challenges will be instrumental to your success as a Retail Sell Out Consultant.

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

Vadodara, Gujarat

On-site

The role supports the integration, transformation, and delivery of data using tools within the Microsoft Fabric platform. You will collaborate with the data engineering team to provide data and insights solutions, ensuring the delivery of high-quality data to enable analytics capabilities within the organization.

Your key responsibilities will include assisting in the development and maintenance of ETL pipelines using Azure Data Factory and other Fabric tools. You will work closely with senior engineers and analysts to gather requirements, develop prototypes, and support data integration from various sources. Additionally, you will play a role in developing and maintaining data warehouse schemas, contributing to documentation, and participating in testing efforts to uphold data reliability. It is crucial to learn and adhere to data standards and governance practices as directed by the team.

Essential skills and experience for this role include a solid understanding of data engineering concepts and data structures, familiarity with Microsoft data tools like Azure Data Factory, OneLake, or Synapse, and knowledge of ETL processes and data pipelines. The ability to work collaboratively in an Agile/Kanban team environment is essential. Possessing a Microsoft Certified Fabric DP-600 or DP-700 certification, along with any other relevant Azure data certification, is advantageous.

Desirable skills and experience include familiarity with medallion architecture principles, exposure to MS Purview or other data governance tools, understanding of data warehousing and reporting concepts, and an interest or background in retail data domains.

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

Faridabad, Haryana

On-site

We are seeking a skilled QA / Data Engineer with 3-5 years of experience. As the ideal candidate, you will possess expertise in manual testing and SQL, along with knowledge of automation and performance testing. Your primary responsibility will be to ensure the quality and reliability of our data-driven applications through comprehensive testing and validation.

Key responsibilities:
- Apply extensive experience in manual testing, particularly in data-centric environments.
- Demonstrate strong SQL skills for data validation, querying, and testing database functionality.
- Implement data engineering concepts, including ETL processes, data pipelines, and data warehousing.
- Work with geospatial data to enhance data quality and analysis.
- Apply QA methodologies and best practices for software and data testing.
- Use effective communication skills for seamless collaboration within the team.

Desired skills:
- Experience with automation testing tools and frameworks (e.g., Selenium, JUnit) for data pipelines.
- Proficiency in performance testing tools (e.g., JMeter, LoadRunner) to evaluate data systems.
- Familiarity with data engineering tools and platforms (e.g., Apache Kafka, Apache Spark, Hadoop).
- Understanding of cloud-based data solutions (e.g., AWS, Azure, Google Cloud) and their testing methodologies.

Qualifications:
- Bachelor of Engineering / Bachelor of Technology (B.E./B.Tech.)

In this role, you will play a crucial part in ensuring the quality of our data-centric applications by conducting thorough testing and validation. Your expertise in manual testing, SQL, ETL processes, data pipelines, and data warehousing, together with your automation and performance testing skills, will be key to your success. Join our team in Bengaluru/Gurugram and contribute to the reliability and efficiency of our data-driven solutions.
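As an illustrative sketch of the SQL-driven data validation this role centres on — pytest against an in-memory SQLite database; the schema and the specific checks are hypothetical:

```python
# Toy data-quality tests: SQL checks against a database, driven by pytest.
import sqlite3
import pytest

@pytest.fixture()
def conn():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
    db.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 25.5)])
    yield db
    db.close()

def test_no_null_amounts(conn):
    # Completeness check: every order must carry an amount.
    nulls = conn.execute(
        "SELECT COUNT(*) FROM orders WHERE amount IS NULL"
    ).fetchone()[0]
    assert nulls == 0

def test_ids_are_unique(conn):
    # Uniqueness check: the business key must not repeat.
    dupes = conn.execute(
        "SELECT COUNT(*) FROM "
        "(SELECT id FROM orders GROUP BY id HAVING COUNT(*) > 1)"
    ).fetchone()[0]
    assert dupes == 0
```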

Posted 2 days ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, you will be part of a team of innovative professionals working with cutting-edge technologies. Our purpose is anchored in bringing real positive changes in an increasingly virtual world, transcending generational gaps and future disruptions.

We are currently seeking SQL professionals for the role of Data Engineer with 4-6 years of experience. The ideal candidate must have a strong academic background.

As a Data Engineer at BNY Mellon in Pune, you will be responsible for designing, developing, and maintaining scalable data pipelines and ETL processes using Apache Spark and SQL. You will collaborate with data scientists and analysts to understand data requirements, optimize and query large datasets, ensure data quality and integrity, implement data governance and security best practices, participate in code reviews, and troubleshoot data-related issues promptly.

Qualifications for this role include 4-6 years of experience in data engineering, proficiency in SQL and data processing frameworks like Apache Spark, knowledge of database technologies such as SQL Server or Oracle, experience with cloud platforms like AWS, Azure, or Google Cloud, familiarity with data warehousing solutions, understanding of Python, Scala, or Java for data manipulation, excellent analytical and problem-solving skills, and good communication skills to work effectively in a team environment.

Joining YASH means being empowered to shape your career in an inclusive team environment. We offer career-oriented skilling models and promote continuous learning, unlearning, and relearning at a rapid pace. Our workplace is based on four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; and stable employment with a great atmosphere and ethical corporate culture.

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

As a Data Engineer, you will be responsible for designing, developing, and implementing data pipelines using StreamSets Data Collector. Your role involves ingesting, transforming, and delivering data from diverse sources to target systems. You will write and maintain efficient, reusable pipelines while adhering to coding standards and best practices. Additionally, developing custom processors and stages within StreamSets to address unique data integration challenges is a key aspect of your responsibilities. Ensuring data accuracy and consistency is crucial, and you will implement data validation and quality checks within StreamSets pipelines. Optimizing pipeline performance for high-volume data processing and automating deployment and monitoring using CI/CD tools are essential tasks you will perform.

In terms of quality assurance and testing, you will develop comprehensive test plans and test cases to validate pipeline functionality and data integrity. Thorough testing, debugging, and troubleshooting of pipelines will be conducted to identify and resolve issues. You will also standardize quality assurance procedures for StreamSets development and perform performance testing and tuning to ensure optimal pipeline performance.

When it comes to problem-solving and support, you will research and analyze complex software-related issues to provide effective solutions. Timely resolution of production issues related to StreamSets pipelines is part of your responsibility. Providing technical support and guidance to team members on StreamSets development, and monitoring pipeline logs and metrics for issue identification and resolution, are also key tasks.

Strategic alignment and collaboration are essential aspects of the role. Understanding and aligning with departmental, segment, and organizational strategies and objectives is necessary, as is collaboration with data engineers, data analysts, and stakeholders to deliver effective data solutions. Documenting pipeline designs and configurations, participating in code reviews, and contributing to the development of data integration best practices and standards are also part of your responsibilities.

To qualify for this role, you should have a Bachelor's degree in Computer Science, Information Technology, or a related field, and a minimum of 3-5 years of hands-on experience in systems analysis or application programming development with a focus on data integration. Proven experience in developing and deploying StreamSets Data Collector pipelines, a strong understanding of data integration concepts and best practices, proficiency in SQL, experience with relational databases, various data formats (JSON, XML, CSV, Avro, Parquet), cloud platforms (AWS, Azure, GCP), and cloud-based data services, as well as experience with version control systems (Git), are essential qualifications. Strong analytical and problem-solving skills, excellent communication and collaboration abilities, and the capacity to work independently are also necessary for this role.

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Kolkata, West Bengal

On-site

You should have an in-depth understanding of data management, including permissions, recovery, security, and monitoring, along with strong experience implementing data analysis techniques such as exploratory data profiling. You should also have a solid grasp of design patterns and hands-on experience developing data pipelines for batch processing.

Your role will require you to design and develop ETL processes that populate star schemas from various source data for data warehouse implementations supporting a product in both cloud and on-premise environments. You should be able to actively participate in the requirements gathering process and design business process dimensional models. Collaborating with data providers to address data gaps and adjust source-system data structures for seamless analysis and integration with other company data will be a key responsibility.

A basic understanding of scripting languages like Python is necessary for this role. Moreover, you should be skilled in both proactive and reactive performance tuning at the instance, database, and query level to optimize data processing efficiency.
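For illustration, a small hypothetical sketch of the star-schema population described above: deriving a product dimension with surrogate keys and a fact table keyed against it, using pandas with invented column names:

```python
# Toy star-schema load: derive a product dimension and a sales fact table.
import pandas as pd

sales = pd.DataFrame({
    "sold_at": ["2024-01-05", "2024-01-06", "2024-01-06"],
    "product": ["widget", "gadget", "widget"],
    "amount": [9.99, 24.50, 9.99],
})

# Dimension: one row per distinct product, with a surrogate key.
dim_product = sales[["product"]].drop_duplicates().reset_index(drop=True)
dim_product["product_key"] = dim_product.index + 1

# Fact: measures plus a foreign key into the dimension.
fact_sales = sales.merge(dim_product, on="product")[
    ["sold_at", "product_key", "amount"]
]

print(dim_product)
print(fact_sales)
```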

Posted 3 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies