
259 Data Pipelines Jobs - Page 3

Set Up a Job Alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

8.0 - 14.0 years

0 Lacs

Thane, Maharashtra

On-site

The Senior Data Solution Analyst will work within the Data & Analytics Office in Mumbai/Thane to implement cutting-edge data solutions that align with business requirements. As part of the Data Management practice, you will be responsible for designing and building data pipelines for Data Lake and Cloud solutions, participating in data mart design and optimization exercises, designing ETL data pipelines, and collaborating with business stakeholders to address data concerns.

Key Responsibilities:
- Demonstrate a strong understanding of data management practices, data quality, and new-generation technology stacks
- Participate in data mart design and optimization exercises
- Design and implement ETL data pipelines and monitoring mechanisms
- Analyze data concerns and collaborate with business stakeholders to address issues
- Build data mart solutions with the team and bring data expertise to the Data & Analytics Office
- Bring a minimum of 8 to 14 years of experience in Data Management within the Banking domain

Managerial & Leadership Responsibilities:
- Lead a team of 2-5 developers
- Demonstrate a strong understanding of the technology stack
- Bring techno-functional knowledge and delivery leadership qualities

Key Success Metrics:
- Successful implementation of the Data Solution and Management framework across business lines (Retail Banking / Cards domain)
- Delivery of data solutions that solve business problems

Posted 1 week ago

Apply

12.0 - 16.0 years

0 Lacs

Delhi

On-site

We are looking for a Systems Architect (AVP level) with extensive experience in designing and scaling Generative AI solutions for production. As a Systems Architect, you will collaborate with data scientists, ML engineers, and product leaders to shape enterprise-grade GenAI platforms. Your responsibilities will include designing and scaling LLM-based systems such as chatbots, copilots, RAG, and multi-modal AI, as well as architecting data pipelines, training/inference workflows, and MLOps integration. The systems you design must be modular, secure, scalable, and cost-effective, and you will also work on model orchestration, agentic AI, vector DBs, and CI/CD for AI.

The ideal candidate has 12-15 years of experience in cloud-native and distributed systems, with at least 2-3 years in GenAI/LLMs using tools like LangChain, Hugging Face, and Kubeflow. Proficiency in cloud platforms such as AWS, GCP, or Azure (SageMaker, Vertex AI, Azure ML) is required, and experience with RAG, semantic search, agent orchestration, and MLOps will be beneficial. Strong architectural thinking and effective communication with stakeholders are essential. Preferred qualifications include cloud certifications, AI open-source contributions, and knowledge of security and governance principles. If you are passionate about designing cutting-edge Generative AI solutions and possess the necessary skills and experience, we encourage you to apply for this leadership role.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Thane, Maharashtra

On-site

As an ETL Developer at our cutting-edge FinTech and RegTech company, you will design, implement, and expand data pipelines through extraction, transformation, and loading activities. You will investigate data to identify potential issues within ETL pipelines, propose solutions, and improve statistical efficiency and data quality by developing and implementing data collection systems. Acquiring data from primary or secondary sources, maintaining databases, and analyzing trends in complex data sets will be key aspects of your role, along with working closely with management to prioritize business needs, identifying process improvement opportunities, and preparing documentation. The role also involves quality testing and data assurance and demands high attention to detail; a passion for complex data structures and problem solving is essential for success.

Qualifications include a Bachelor's degree in computer science, electrical engineering, or information technology, along with experience in IT and working with complex data sets. Knowledge of at least one ETL tool (such as SSIS, Informatica, Talend, etc.) is required, and familiarity with HPCC Systems and C++ is preferred. Familiarity with Kafka on-premise architectures and the ELK stack, as well as an understanding of cross-cluster replication, index lifecycle management, and hot-warm architectures, will also be beneficial. If you are enthusiastic about leveraging AI, machine learning, and big data analytics to simplify operations in compliance, fraud detection, reconciliation, and analytics for financial institutions, we invite you to apply for this exciting opportunity.
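
For illustration only, here is a minimal sketch of the extract-transform-load cycle this role describes, with pandas standing in for a dedicated ETL tool such as SSIS or Talend; the file names, columns, and quality rules are hypothetical:

```python
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    """Pull raw records from a primary source (here, a CSV export)."""
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Apply basic data-quality rules before loading."""
    df = df.drop_duplicates(subset="transaction_id")
    df = df.dropna(subset=["account_id", "amount"])
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    bad_rows = int(df["amount"].isna().sum())
    if bad_rows:
        print(f"flagged {bad_rows} rows with unparseable amounts")
    return df[df["amount"].notna()]

def load(df: pd.DataFrame, path: str) -> None:
    """Write the cleaned data to the target store (a Parquet file here)."""
    df.to_parquet(path, index=False)  # requires pyarrow or fastparquet

if __name__ == "__main__":
    load(transform(extract("transactions.csv")), "transactions_clean.parquet")
```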

Posted 1 week ago

Apply

10.0 - 15.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

As the COE Solution Development Lead at Teradata, you will be a key thought leader responsible for overseeing the detailed design, development, and maintenance of complex data and analytic solutions. The role calls for strong technical and project management skills, as well as team building and mentoring capabilities, and a deep understanding of Teradata's Solutions Strategy, Technology, Data Architecture, and the partner engagement model. Reporting directly to Teradata's Head of Solution COE, you will lead a team that develops scalable, efficient, and innovative data and analytics solutions to address complex business problems.

Your key responsibilities will include leading the end-to-end process of solution development, designing comprehensive solution architectures, ensuring flexibility for the integration of various data sources and platforms, implementing best practices in data analytics solutions, collaborating with senior leadership, and mentoring a team of professionals to foster a culture of innovation and continuous learning. You will deliver solutions on time and within budget, facilitate knowledge sharing across teams, and ensure that data solutions are scalable, secure, and aligned with the organization's overall technological roadmap. You will collaborate with the COE Solutions lead to transform conceptual solutions into detailed designs and lead a team of data scientists, solution engineers, data engineers, and software engineers. You will also work closely with product development, legal, IT, and business teams to ensure seamless integration of data analytics solutions and the protection of related IP.

To qualify for this role, you should have a Bachelor's degree in Computer Science, Engineering, Data Science, or a related field, with a preference for an MS or MBA. You should have over 15 years of experience in IT, with at least 10 years in data and analytics solution development and 4+ years in a leadership or senior management position, along with a proven track record in developing data-driven solutions, experience working with cross-functional teams, and a strong understanding of emerging trends in data analytics technologies. We believe you will thrive at Teradata thanks to our people-first culture, flexible work model, focus on well-being, and commitment to Diversity, Equity, and Inclusion. If you are a collaborative, analytical, and innovative professional with excellent communication skills and a passion for data analytics, we invite you to join us in solving business challenges and driving enterprise analytics forward.

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

FICO is a leading global analytics software company, helping businesses in 100+ countries make informed decisions. Join our esteemed team today to unleash your career potential. As part of the product development team, you will provide innovative thought leadership. This position offers the chance to gain a profound understanding of our business and to collaborate closely with product management to design, architect, and develop a highly feature-rich product. You will report to the VP, Software Engineering.

Your responsibilities will include designing, developing, testing, deploying, and supporting the capabilities of an enterprise-level platform. You will create scalable microservices focused on high performance, availability, interoperability, and reliability, contribute designs and technical proofs of concept, and adhere to standards set by the architecture team. Collaborating with senior engineers and product management to create epics and stories, and defining technical acceptance criteria, will also be part of your role.

We are seeking candidates with a Bachelor's/Master's degree in computer science or a related field and a minimum of 7 years of experience in software architecture, design, development, and testing. Proficiency in Java (Java 17 and above), Spring, Spring Boot, Maven/Gradle, Docker, Git, and GitHub is essential. Expertise in data structures, algorithms, multi-threading, and memory management, along with experience with data engineering services, is highly desirable. The ideal candidate has a strong understanding of microservices architecture, RESTful and gRPC APIs, cloud engineering areas such as Kubernetes and AWS/Azure/GCP, and databases such as MySQL, PostgreSQL, MongoDB, and Cassandra. Experience with Agile or Scaled Agile software development, along with excellent communication and documentation skills, is crucial.

Join us at FICO to be part of a culture that reflects our core values and offers an inclusive work environment. You will have the opportunity to contribute to impactful projects, develop professionally, and be rewarded for your hard work. We provide competitive compensation, benefits, and rewards programs, as well as a people-first work environment that promotes work/life balance and employee engagement. Make a move to FICO and be part of a leading organization in the Big Data analytics field, helping businesses leverage data to enhance decision-making. Explore how you can fulfill your potential at www.fico.com/Careers. Please note that information submitted with your application is subject to the FICO Privacy Policy available at https://www.fico.com/en/privacy-policy.

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Karnataka

On-site

As a skilled professional in data engineering, you will develop and manage data pipelines to support our analytics and machine learning initiatives. Your primary focus will be on designing efficient and scalable data pipelines that ensure the smooth flow of data from various sources to our models. You will work closely with the data science team to understand their requirements and implement solutions that meet their needs, writing code to extract, transform, and load data from different sources, and monitoring and troubleshooting the pipelines to ensure their reliability and performance.

In addition to pipeline development, you will optimize and maintain existing pipelines to improve efficiency and reduce processing times, and collaborate with cross-functional teams to identify opportunities for automation and process improvements, driving continuous enhancement of our data infrastructure.

The ideal candidate has a solid background in data engineering, with experience in building and maintaining data pipelines for analytics and machine learning applications. Strong programming skills, particularly in Python and SQL, are essential, along with a good understanding of data processing frameworks and tools. If you are passionate about working with data, enjoy solving complex problems, and thrive in a fast-paced environment, we would love to hear from you. Join us in our mission to leverage data-driven insights and technologies to drive business growth and innovation.

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

As a Data Engineering Lead specializing in AI for Data Quality & Analytics, with 7 to 10 years of experience, you will play a crucial role in developing and maintaining high-quality data ingestion and validation processes from various upstream systems. Your responsibilities will encompass designing and implementing scalable data quality validation systems, developing AI-driven tools for anomaly detection, and leading the development of data pipelines and validation scripts. You will also collaborate with stakeholders to proactively address reporting gaps and ensure the auditability of decisions derived from enriched data.

Expertise in Python, Alteryx, SQL, and cloud data platforms like Snowflake is essential for this role, along with a deep understanding of data pipelines, ETL/ELT processes, and data validation best practices. Experience with AI/ML in data quality and familiarity with enterprise systems like Workday, Beeline, and Excel-based reporting are also required. Strong interpersonal and communication skills are necessary to collaborate effectively with executive stakeholders and distributed teams.

In this position, you will lead a small, distributed team, mentoring junior engineers and analysts while optimizing headcount through AI augmentation. Your leadership will enable team members to focus on higher-value initiatives and align system architecture with business needs. Preferred attributes include experience leading data modernization or AI transformation projects and exposure to dashboard adoption challenges and enterprise change management.

If you are a data engineering professional with a passion for data quality and analytics and possess the requisite skills and experience, send your updated resume to swetha.p@zettamine.com. Join our team in Bangalore and contribute to building AI-enhanced quality frameworks, scalable reporting solutions, and automated anomaly detection systems that drive business insights and decision-making.
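
As a hedged illustration of the AI-driven anomaly detection this posting mentions, an unsupervised detector such as scikit-learn's IsolationForest can flag suspect records for analyst review; the columns and values below are invented:

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical enriched workforce-reporting extract.
df = pd.DataFrame({
    "hours_billed": [40, 38, 41, 39, 160, 40, 37],
    "rate": [85, 90, 88, 87, 85, 900, 86],
})

# Fit an unsupervised detector; `contamination` is the expected anomaly share.
model = IsolationForest(contamination=0.2, random_state=42)
df["anomaly"] = model.fit_predict(df[["hours_billed", "rate"]])

# -1 marks rows the model considers outliers, queued for analyst review.
print(df[df["anomaly"] == -1])
```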

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

This is a full-time Data Engineer position with D Square Consulting Services Pvt Ltd, open Pan-India with a hybrid work model. You should have at least 5 years of experience and be able to join immediately. As a Data Engineer, you will design, build, and scale data pipelines and backend services supporting analytics and business intelligence platforms. A strong technical foundation, Python expertise, API development experience, and familiarity with containerized, CI/CD-driven workflows are essential for this role.

Your key responsibilities will include designing, implementing, and optimizing data pipelines and ETL workflows using Python tools, building RESTful and/or GraphQL APIs, collaborating with cross-functional teams, containerizing data services with Docker, managing deployments with Kubernetes, developing CI/CD pipelines using GitHub Actions, ensuring code quality, and optimizing data access and transformation.

The required skills and qualifications include a Bachelor's or Master's degree in Computer Science or a related field, 5+ years of hands-on experience in data engineering or backend development, expert-level Python skills, experience building APIs with frameworks like FastAPI, Graphene, or Strawberry, proficiency in Docker, Kubernetes, SQL, and data modeling, good communication skills, familiarity with data orchestration tools, experience with streaming data platforms like Kafka or Spark, knowledge of data governance, security, and observability best practices, and exposure to cloud platforms like AWS, GCP, or Azure.

If you are proactive, self-driven, and possess the required technical skills, this Data Engineer position is an exciting opportunity to contribute to the development of cutting-edge data solutions at D Square Consulting Services Pvt Ltd.
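
A minimal sketch of the kind of RESTful data service described above, using FastAPI (one of the frameworks the posting names); the endpoint, model, and metric names are hypothetical, and a production version would query a real warehouse rather than an in-memory dict:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="analytics-api")

class Metric(BaseModel):
    name: str
    value: float

# In-memory store standing in for a warehouse query layer.
METRICS: dict[str, float] = {"daily_active_users": 1042.0}

@app.get("/metrics/{name}", response_model=Metric)
def read_metric(name: str) -> Metric:
    # A production version would run a SQL query here instead.
    return Metric(name=name, value=METRICS.get(name, 0.0))
```

Run locally with, for example, `uvicorn main:app --reload` and query `/metrics/daily_active_users`.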

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

The job is for an AI and Machine Learning Developer position located in Coimbatore. As an AI and Machine Learning Developer, your primary responsibilities will include designing, developing, and implementing machine learning algorithms and AI solutions. Collaboration with data scientists, data collection and analysis, training and evaluation of machine learning models, and optimizing solutions for performance and scalability are key aspects of this role. It is important to stay updated with the latest advancements in AI and machine learning and incorporate new techniques into ongoing projects.

To be eligible for this role, you should have at least 4 years of experience as an AI and Machine Learning Developer, strong programming skills in Python (knowledge of JavaScript or other languages is a plus), a solid understanding of machine learning fundamentals and data pipelines, and practical experience with APIs from platforms such as OpenAI, Google Cloud, or Azure. You should be able to work effectively both independently and as part of a team in a dynamic work environment. Experience working with LLMs (Large Language Models) and exposure to prompt engineering or generative AI would be advantageous.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

You are a skilled and motivated AI Developer with 3+ years of hands-on experience in building, deploying, and optimizing AI/ML models. Strong proficiency in Python, Scikit-learn, and machine learning algorithms, together with practical experience of Azure AI services, Azure AI Foundry, Copilot Studio, and Dataverse, is mandatory. You will be responsible for designing intelligent solutions using modern deep learning and neural network architectures, integrated into scalable cloud-based environments.

Your key responsibilities will include using Azure AI Foundry and Copilot Studio to build AI-driven solutions that can be embedded within enterprise workflows. You will design, develop, and implement AI/ML models using Python, Scikit-learn, and modern deep learning frameworks, and build and optimize predictive models using structured and unstructured data from data lakes and other enterprise sources. Collaborating with data engineers to process and transform data pipelines across Azure-based environments, you will develop and integrate applications with Microsoft Dataverse for intelligent business process automation. Applying best practices in data structures and algorithm design, you will ensure high performance and scalability of AI applications. Your role will involve training, testing, and deploying machine learning, deep learning, and neural network models in production environments, ensuring model governance, performance monitoring, and continuous learning using Azure MLOps pipelines, and collaborating cross-functionally with data scientists, product teams, and cloud architects to drive AI innovation within the organization.

As a qualified candidate, you hold a Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field. With 3+ years of hands-on experience in AI/ML development, you possess practical experience with Copilot Studio and Microsoft Dataverse integrations. Expertise in Microsoft Azure is essential, particularly with services such as Azure Machine Learning, Azure Data Lake, and Azure AI Foundry. Proficiency in Python and machine learning libraries like Scikit-learn, Pandas, and NumPy is required, along with a solid understanding of data structures, algorithms, and object-oriented programming and experience with data lakes, data pipelines, and large-scale data processing. A deep understanding of neural networks, deep learning frameworks (e.g., TensorFlow, PyTorch), and model tuning will be valuable in this role, and familiarity with MLOps practices and lifecycle management on cloud platforms is beneficial. Strong problem-solving abilities, communication skills, and team collaboration are important attributes for this position.

Preferred qualifications include an Azure AI or Data Engineering certification, experience deploying AI-powered applications in enterprise or SaaS environments, knowledge of generative AI or large language models (LLMs), and exposure to REST APIs, CI/CD pipelines, and version control systems like Git.

Posted 1 week ago

Apply

3.0 - 5.0 years

1 - 5 Lacs

Bhubaneswar, Odisha, India

On-site

Job Description: Software Development Engineer

Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must Have Skills: PySpark
Good to Have Skills: N/A
Experience: Minimum 3 years of experience required
Educational Qualification: 15 years of full-time education

Summary: As a Software Development Engineer, you will analyze, design, code, and test multiple components of application code across one or more clients. You will perform maintenance, enhancements, and/or development work, contributing to the overall success of the projects.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with team members to analyze, design, and implement software solutions.
- Develop and maintain efficient and reliable code following best practices.
- Participate in code reviews and provide constructive feedback to peers.
- Troubleshoot and debug software applications to ensure optimal performance.
- Stay updated on emerging technologies and apply them to projects.

Professional & Technical Skills:
- Must Have Skills: Proficiency in PySpark and Python.
- Strong understanding of data processing and manipulation using PySpark.
- Experience with distributed computing frameworks like Apache Spark.
- Knowledge of data analytics and machine learning concepts.
- Hands-on experience in developing scalable and efficient data pipelines.

Additional Information:
- The candidate should have a minimum of 3 years of experience in PySpark.
- This position is based at our Bhubaneswar office.
- 15 years of full-time education is required.
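
For orientation, a minimal PySpark sketch of the kind of data processing this role centers on, filtering and aggregating a small in-memory DataFrame; a real job would read from and write to distributed storage, and the table and column names here are invented:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

# Hypothetical input; a real pipeline would read from HDFS, S3, or a lakehouse table.
orders = spark.createDataFrame(
    [("o1", "IN", 120.0), ("o2", "IN", 80.0), ("o3", "US", 200.0)],
    ["order_id", "country", "amount"],
)

# A typical transform step: filter, then aggregate per country.
summary = (
    orders.filter(F.col("amount") > 50)
    .groupBy("country")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("order_id").alias("order_count"),
    )
)
summary.show()
```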

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Python Back End Developer, you will handle day-to-day tasks related to back-end web development, software development, and object-oriented programming (OOP). This contract role offers a hybrid work model, with the primary location being Hyderabad and some flexibility for work-from-home.

You should possess proficiency in back-end web development and software development, along with a strong understanding of object-oriented programming (OOP). Basic front-end development skills, solid programming skills, and experience with cloud platforms such as GCP/AWS are essential for this role, as are excellent problem-solving and analytical skills that enable you to work effectively both independently and as part of a team. Experience with Python frameworks like Django or Flask would be advantageous, as would experience building data pipelines with tools like Airflow or Netflix Conductor and familiarity with Apache Spark/Beam and Kafka.

This role offers an exciting opportunity for a skilled Python Back End Developer to contribute to a dynamic team and work on challenging projects in a collaborative environment.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

As a GenAI Model Developer, you will build and deploy GenAI models using techniques such as RAG and fine-tuning. You will develop AI/ML algorithms to analyze large volumes of historical data in order to make predictions and recommendations, and implement and optimize deep learning models for generative tasks like image synthesis and voice applications. Collaboration with software engineers to integrate Generative AI models into production systems will be a key aspect of your role.

You should be able to evaluate application cases and the problem-solving potential of AI/ML algorithms, ranking them according to their likelihood of success. This involves understanding data through exploration and visualization and identifying discrepancies in data distribution. Working with both structured and unstructured data, you will develop algorithms based on statistical modeling procedures and build scalable machine learning solutions for production, leveraging cloud platforms (AWS preferred) for training and deploying large-scale solutions. You should have working knowledge of managing the ModelOps framework and understand CI/CD processes for product deployment. You will collaborate with data engineers to build data and model pipelines while ensuring accuracy, take complete ownership of assigned projects, and have experience working in Agile environments; proficiency in JIRA or equivalent project tracking tools is required to succeed in this role.
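
Retrieval-augmented generation (RAG), named above, pairs a retriever with a generative model: relevant documents are fetched first and then passed to the LLM as context. A minimal retrieval-side sketch, with a toy corpus and scikit-learn TF-IDF standing in for a production embedding model and vector database:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy knowledge base; a real system would index enterprise documents.
corpus = [
    "Refunds are processed within five business days.",
    "Premium accounts include priority support.",
    "Passwords can be reset from the login page.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k most similar documents to ground the LLM's answer."""
    q = vectorizer.transform([query])
    scores = cosine_similarity(q, doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

# The retrieved text would be prepended to the prompt sent to the LLM.
print(retrieve("How long do refunds take?"))
```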

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Tiruchirappalli, Tamil Nadu

On-site

INFOC is seeking a motivated and curious Associate Data & AI Engineer to assist in the development of AI-enabled analytics solutions using Microsoft Fabric, Power BI, and Azure Data Services. In this role, you will work closely with senior architects and consultants on real-world transformation projects, enabling customers to derive actionable insights through modern data platforms. This position is ideal for early-career professionals who are enthusiastic about working with cutting-edge Microsoft technologies, honing their skills, and driving tangible business outcomes.

Your primary responsibilities will include supporting the development of data pipelines, dataflows, and transformations using Microsoft Fabric and Azure tools. You will also play a key role in building Power BI dashboards and semantic models tailored to customer reporting requirements, and you will collaborate with solution architects to prepare, cleanse, and model data sourced from a variety of systems, including ERP, CRM, and external sources. Additionally, you will engage in Proof-of-Concepts (PoCs) involving AI integrations such as Azure OpenAI and Azure ML Studio, conduct data validation, testing, and performance tuning, and document technical processes, solution architecture, and deployment steps.

The ideal candidate has 2-5 years of experience in data analytics, engineering, or AI solution development. Hands-on proficiency with Power BI (datasets, visuals, DAX) and basic Azure Data tools (Data Factory, Data Lake) is required. Exposure to, or a willingness to learn, Microsoft Fabric, Lakehouses, and AI workloads is highly valued. A strong grasp of SQL, data modeling, and visualization principles is essential, and familiarity with Python or Power Query (M) is advantageous. Strong analytical and communication skills, along with a problem-solving mindset, are crucial, and a Microsoft certification (PL-300, DP-203, or similar) would be a bonus.

At INFOC, you will gain valuable real-world project experience in AI, Data Engineering, and Business Intelligence. You will receive mentorship from senior Microsoft-certified consultants, have the opportunity to progress into roles such as AI Solution Architect or Power BI Specialist, access certifications and structured learning paths, and be part of a collaborative and innovation-driven culture. Begin your journey in the field of AI & analytics with INFOC today. Apply now by sending your resume to careers@infoc.com. To learn more about INFOC, visit www.infoc.com.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

As a Senior Data Engineer (Azure MS Fabric) at Srijan Technologies PVT LTD, located in Gurugram, Haryana, India, you will be responsible for designing and developing scalable data pipelines using Microsoft Fabric. Your role will involve working on both batch and real-time ingestion and transformation, integrating with Azure Data Factory for smooth data flow, and collaborating with data architects to implement governed Lakehouse models in Microsoft Fabric.

You will monitor and optimize the performance of data pipelines and notebooks in Microsoft Fabric, applying tuning strategies to reduce costs, improve scalability, and ensure reliable data delivery. Collaboration with cross-functional teams, including BI developers, analysts, and data scientists, is essential to gather requirements and build high-quality datasets. Additionally, you will document pipeline logic, lakehouse architecture, and semantic layers clearly, following development standards and contributing to internal best practices for Microsoft Fabric-based solutions.

To excel in this role, you should have at least 5 years of experience in data engineering within the Azure ecosystem, with hands-on experience in Microsoft Fabric, Lakehouse, Dataflows Gen2, and Data Pipelines. Proficiency in building and orchestrating pipelines with Azure Data Factory and/or Microsoft Fabric Dataflows Gen2 is required, along with a strong command of SQL, PySpark, and Python applied to data integration and analytical workloads, and experience optimizing pipelines and managing compute resources for cost-effective data processing in Azure/Fabric.

Preferred skills include experience in the Microsoft Fabric ecosystem; familiarity with OneLake, Delta Lake, and Lakehouse principles; expert knowledge of PySpark and strong SQL and Python scripting within Microsoft Fabric or Databricks notebooks; and an understanding of Microsoft Purview, Unity Catalog, or Fabric-native tools for metadata, lineage, and access control. Exposure to DevOps practices for Fabric and Power BI, as well as knowledge of Azure Databricks for Spark-based transformations and Delta Lake pipelines, would be a plus.

If you are passionate about developing efficient data solutions in a collaborative environment and have a strong background in data engineering within the Azure ecosystem, this role could be the perfect fit for you. Apply now to be part of a dynamic team driving innovation in data architecture and analytics.
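
As a rough sketch of the notebook-based transformation work described here, the following assumes it runs inside a Microsoft Fabric notebook (where a `spark` session is provided) and uses invented Lakehouse table and column names:

```python
# Assumed to run in a Microsoft Fabric notebook with a Lakehouse attached,
# where the `spark` session is pre-initialized; table/column names are hypothetical.
from pyspark.sql import functions as F

raw = spark.read.format("delta").load("Tables/raw_sales")

cleaned = (
    raw.dropDuplicates(["sale_id"])
       .withColumn("sale_date", F.to_date("sale_ts"))
       .filter(F.col("amount") > 0)
)

# Write back to the Lakehouse as a governed Delta table.
cleaned.write.format("delta").mode("overwrite").saveAsTable("clean_sales")
```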

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

The Engineering Manager position at Miimansa involves leading the end-to-end product engineering lifecycle within the healthcare and life sciences domain. As an Engineering Manager, you will provide technical oversight, project planning and execution, and innovation and problem-solving to ensure the quality, reliability, security, and scalability of AI systems in production environments. Your key responsibilities will include hands-on leadership in AI/ML system design and deployment, guiding solution architecture, data pipelines, algorithm selection, and model development. You will define project scope, objectives, and deliverables, align day-to-day execution with strategic goals, and manage detailed planning, timelines, and resource allocation to meet world-class delivery standards within time and budget constraints.

To qualify for this role, you should have a Bachelor's, Master's, or Ph.D. in Computer Science, Machine Learning, Engineering, or a related technical field. You must have proven hands-on experience in AI/ML system development; strong foundational knowledge of data engineering, machine learning algorithms, and cloud-native architectures; and a demonstrated ability to lead from within the team while actively contributing to technical problem-solving. You should also have experience managing complex projects with cross-functional teams, familiarity with MLOps, CI/CD practices, versioning, and monitoring AI systems in production, as well as strong interpersonal and leadership skills. A track record of fostering innovation, mentoring talent, and elevating team performance through technical and emotional intelligence will be highly valued.

Preferred qualifications include experience in healthcare IT or clinical research informatics, familiarity with healthcare data standards like HL7 and FHIR, knowledge of AI/ML applications in healthcare, and an understanding of regulatory requirements in healthcare software development. If you are looking to make a significant impact in the field of AI/ML within healthcare and life sciences, this role at Miimansa offers a unique opportunity to lead innovative technology solutions and drive continuous improvement in a collaborative and dynamic environment.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Manager at Autodesk, you will lead the BI and Data Engineering team to develop and implement business intelligence solutions. Your role is crucial in empowering decision-makers through trusted data assets and scalable self-serve analytics. You will oversee the design, development, and maintenance of data pipelines, databases, and BI tools to support data-driven decision-making across the CTS organization. Reporting to the leader of the CTS Business Effectiveness department, you will collaborate with stakeholders to define data requirements and objectives.

Your responsibilities will include leading and managing a team of data engineers and BI developers, fostering a collaborative team culture, managing data warehouse plans, ensuring data quality, and delivering impactful dashboards and data visualizations. You will also collaborate with stakeholders to translate technical designs into business-appropriate representations, analyze business needs, and create data tools for analytics and BI teams. Staying up to date with data engineering best practices and technologies is essential to keep the company ahead of the industry.

To qualify for this role, you should have 3 to 5 years of experience managing data teams and a BA/BS in Data Science, Computer Science, Statistics, Mathematics, or a related field. Proficiency in Snowflake, Python, SQL, Airflow, Git, and big data environments like Hive, Spark, and Presto is required. Experience with workflow management, data transformation tools, and version control systems is preferred, and familiarity with Power BI, the AWS environment, Salesforce, and remote team collaboration is advantageous. The ideal candidate is a data ninja and leader who can derive insights from disparate datasets, understands Customer Success, tells compelling stories using data, and engages business leaders effectively.

At Autodesk, we are committed to creating a culture where everyone can thrive and realize their potential. Our values and ways of working help our people succeed, leading to better outcomes for our customers. If you are passionate about shaping the future and making a meaningful impact, join us in our mission to turn innovative ideas into reality. Autodesk offers a competitive compensation package based on experience and location; in addition to base salaries, we provide discretionary annual cash bonuses, commissions, stock grants, and a comprehensive benefits package. If you are interested in a sales career at Autodesk or want to learn more about our commitment to diversity and belonging, please visit our website for more information.

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

Kollam, Kerala

On-site

The ideal candidate for the Research Assistant/Associate/Senior Research Scientist/Post-Doctoral Fellow (Data-Driven Sciences) position in Kollam, Kerala should hold a Bachelor's/Master's/Ph.D. degree in Computer Science, Artificial Intelligence, or Electrical and Computer Engineering, with strong programming skills in Python and R, proficiency in machine learning and deep learning techniques, and excellent analytical and problem-solving abilities. Effective communication and teamwork skills are also essential for this role.

Key Responsibilities:
- Data Analysis and Processing: Clean, preprocess, and explore large and complex datasets; employ advanced data mining techniques to extract meaningful insights; and develop data pipelines for efficient data ingestion and transformation.
- Model Development and Evaluation: Design, implement, and evaluate machine learning and deep learning models; optimize model performance through hyperparameter tuning and feature engineering; and assess model accuracy, precision, recall, and other relevant metrics.
- Research Collaboration: Collaborate with researchers to identify research questions and formulate hypotheses; contribute to the development of research papers and technical reports; and present research findings at conferences and workshops.
- Tool Proficiency: Use data science tools and libraries such as Python (Pandas, NumPy, Scikit-learn, TensorFlow, PyTorch), R, SQL, LLMs, and cloud platforms (AWS, GCP, Azure), and stay up to date with the latest advancements in data science and machine learning.

This position falls under the Research category. The application deadline is July 31, 2025.
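
The hyperparameter tuning and metric evaluation listed under model development can be sketched with scikit-learn; the synthetic dataset and parameter grid below are purely illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic data standing in for a cleaned research dataset.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Cross-validated grid search over a small hyperparameter space,
# scored on precision (recall, accuracy, etc. work the same way).
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    scoring="precision",
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```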

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

As a Senior Data Engineer (Azure MS Fabric) at Srijan Technologies PVT LTD, located in Gurugram, Haryana, India, you will be responsible for designing and developing scalable data pipelines using Microsoft Fabric. Your primary focus will be on developing and optimizing data pipelines, including Fabric Notebooks, Dataflows Gen2, and Lakehouse architecture, for both batch and real-time ingestion and transformation. You will collaborate with data architects and engineers to implement governed Lakehouse models in Microsoft Fabric, ensuring data solutions are performant, reusable, and aligned with business needs and compliance standards.

Monitoring and improving the performance of data pipelines and notebooks in Microsoft Fabric will be a key aspect of your role. You will apply tuning strategies to reduce costs, improve scalability, and ensure reliable data delivery across domains. Working closely with BI developers, analysts, and data scientists, you will gather requirements and build high-quality datasets to support self-service BI initiatives, and you will document pipeline logic, lakehouse architecture, and semantic layers clearly.

Your experience with Lakehouses, Notebooks, Data Pipelines, and Direct Lake in Microsoft Fabric will be crucial in delivering reliable, secure, and efficient data solutions that integrate with Power BI, Azure Synapse, and other Microsoft services. You should have at least 5 years of experience in data engineering within the Azure ecosystem, with hands-on experience in Microsoft Fabric components such as Lakehouse, Dataflows Gen2, and Data Pipelines. Proficiency in building and orchestrating pipelines with Azure Data Factory and/or Microsoft Fabric Dataflows Gen2 is required, along with a strong command of SQL, PySpark, and Python and experience optimizing pipelines for cost-effective data processing in Azure/Fabric.

Preferred skills include experience in the Microsoft Fabric ecosystem; familiarity with OneLake, Delta Lake, and Lakehouse principles; expert knowledge of PySpark and strong SQL and Python scripting within Microsoft Fabric or Databricks notebooks; and an understanding of Microsoft Purview or Unity Catalog. Exposure to DevOps practices for Fabric and Power BI, and knowledge of Azure Databricks for Spark-based transformations and Delta Lake pipelines, would be advantageous.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

You will be joining Amgen, a company that leverages biology and technology to combat challenging diseases and enhance people's lives by providing innovative medicines to millions of patients. With a history of over 40 years in the biotechnology industry, Amgen continues to pioneer innovation using cutting-edge technology and human data.

As a Senior Software Architect at Amgen's AI & Data Innovation Lab, you will play a crucial role in the software engineering practice by developing top-tier talent, establishing engineering best practices, and promoting full-stack development capabilities within the organization. Your primary responsibilities will involve designing end-to-end architecture for digital products incorporating AI features, ensuring performance, robustness, and scalability, and selecting frameworks and tools that enable standardization and repeatability. You will collaborate closely with software and AI engineers to choose data models, develop modeling approaches, and define versioning strategies and continuous delivery processes for models and APIs. Additionally, you will oversee model monitoring and maintenance processes and scaling strategies, establish pipelines for model deployment and retraining, conduct architectural reviews, develop standards and best practices in AI and full-stack engineering, and provide technical mentorship to the engineering team.

To excel in this role, you should have a deep understanding of software engineering best practices, proficiency in the software product development lifecycle, and proven experience designing end-to-end solutions with modular components and APIs for scale, low latency, and high availability. You should also possess expertise in data flow within AI systems; in model monitoring, maintenance, scaling, and deployment strategies; and in backend languages and frameworks, web technologies, and databases. Familiarity with enterprise software systems in the life sciences or healthcare domains, big data platforms, data pipeline development, and data security and privacy regulations would be advantageous. Strong communication skills, problem-solving abilities, attention to detail, self-motivation, and the ability to foster a collaborative work environment are essential for success in this role.

Basic qualifications include a Bachelor's degree in Computer Science, AI, Software Engineering, or a related field, along with a minimum of 8 years of experience in full-stack software engineering, including at least 3 years in an architecture role. At Amgen, we are committed to providing equal opportunities for all individuals, including those with disabilities, by offering reasonable accommodation throughout the job application process, the interview process, essential job functions, and other employment benefits and privileges. If you require any accommodations, please reach out to us to request assistance.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a DataOps Engineer, you will design and maintain scalable ML model deployment infrastructure using Kubernetes and Docker. Your role will involve implementing CI/CD pipelines for ML workflows, ensuring security best practices are followed, and setting up monitoring tools to track system health, model performance, and data pipeline issues. You will collaborate with cross-functional teams to streamline the end-to-end lifecycle of data products and identify performance bottlenecks and data reliability issues in the ML infrastructure.

To excel in this role, you should have strong experience with Kubernetes and Docker for containerization and orchestration, hands-on experience deploying ML models in production environments, and proficiency with orchestration tools like Airflow or Luigi. Familiarity with monitoring tools such as Prometheus, Grafana, or the ELK Stack, along with knowledge of security protocols, CI/CD pipelines, and DevOps practices in a data/ML environment, is essential; exposure to cloud platforms like AWS, GCP, or Azure is preferred. Additionally, experience with MLflow, Seldon, or Kubeflow, knowledge of data governance, lineage, and compliance standards, and an understanding of data pipelines and streaming frameworks would be advantageous. Expertise across these tools and practices will be key to your success in this position.
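
One hedged sketch of the orchestration side of this role: a minimal Airflow DAG chaining a data-quality check ahead of a model deployment step. The task bodies are placeholders, and the `schedule` argument assumes Airflow 2.4+ (older 2.x releases use `schedule_interval`):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def validate_data():
    # Placeholder for a data-quality check on the day's batch.
    print("validating input batch")

def deploy_model():
    # Placeholder for promoting a validated model image to the cluster.
    print("deploying model container")

with DAG(
    dag_id="ml_release_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    validate = PythonOperator(task_id="validate_data", python_callable=validate_data)
    deploy = PythonOperator(task_id="deploy_model", python_callable=deploy_model)
    # Deployment only runs if validation succeeds.
    validate >> deploy
```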

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Karnataka

On-site

As a Data Scientist at our company, you will play a crucial role in supporting the development and deployment of machine learning models and analytics solutions that enhance decision-making throughout the mortgage lifecycle, from acquisition to servicing. Your responsibilities will involve building predictive models, customer segmentation tools, and automation workflows to drive operational efficiency and improve customer outcomes. Collaborating closely with senior data scientists and cross-functional teams, you will translate business requirements into well-defined modeling tasks, with opportunities to leverage natural language processing (NLP), statistical modeling, and experimentation frameworks within a regulated financial setting. You will report to a senior leader in Data Science.

Your key responsibilities will include:
- Developing and maintaining machine learning models and statistical tools for use cases such as risk scoring, churn prediction, segmentation, and document classification.
- Working collaboratively with Product, Engineering, and Analytics teams to identify data-driven opportunities and support automation initiatives.
- Translating business inquiries into modeling tasks, contributing to experimental design, and defining success metrics.
- Assisting in the creation and upkeep of data pipelines and model deployment workflows in collaboration with data engineering.
- Applying techniques such as supervised learning, clustering, and basic NLP to structured and semi-structured mortgage data.
- Supporting model monitoring, performance tracking, and documentation to ensure compliance and audit readiness.
- Contributing to internal best practices, engaging in peer reviews, and participating in knowledge-sharing sessions.
- Staying updated on advancements in machine learning and analytics relevant to the mortgage and financial services sector.

Qualifications:
- Minimum education required: Master's or Ph.D. in engineering, math, statistics, economics, or a related field.
- Minimum years of experience required: 2 (or 1 post-Ph.D.), preferably in mortgage, fintech, or financial services.
- Required certifications: None.

Specific skills or abilities needed:
- Experience working with structured and semi-structured data; exposure to NLP or document classification is advantageous.
- Understanding of the model development lifecycle, encompassing training, validation, and deployment.
- Familiarity with data privacy and compliance considerations (e.g., ECOA, CCPA, GDPR) is desirable.
- Strong communication skills and the ability to present findings to both technical and non-technical audiences.
- Proficiency in Python (e.g., scikit-learn, pandas) and SQL, and familiarity with ML frameworks like TensorFlow or PyTorch.
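
A minimal sketch of the kind of churn-prediction model these responsibilities describe, using a scikit-learn pipeline over toy structured data; every column name and value is invented:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy stand-in for structured mortgage data.
X = pd.DataFrame({
    "loan_age_months": [12, 48, 3, 60],
    "rate_delta": [0.5, -1.2, 0.1, -0.8],
    "channel": ["retail", "broker", "retail", "broker"],
})
y = [0, 1, 0, 1]  # 1 = churned (e.g., refinanced away)

# One pipeline bundles preprocessing and the classifier for deployment.
model = Pipeline([
    ("prep", ColumnTransformer([
        ("num", StandardScaler(), ["loan_age_months", "rate_delta"]),
        ("cat", OneHotEncoder(), ["channel"]),
    ])),
    ("clf", LogisticRegression()),
])
model.fit(X, y)
print(model.predict_proba(X)[:, 1])  # churn probabilities
```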

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

We are seeking a highly motivated Deep Learning Engineer with specialized expertise in computer vision and audio analysis. As part of our team developing AI-driven solutions that use multi-modal deep learning, you will play a crucial role in designing and implementing deep learning models for tasks such as image and video analysis, object detection, and audio classification. Your responsibilities will include integrating attention mechanisms into model architectures, using pretrained models for transfer learning, and working with video data using spatiotemporal modeling techniques. You will also extract and process features from audio, evaluate and optimize models for speed, accuracy, and robustness, and collaborate across teams to deploy models into production.

The ideal candidate has strong programming skills in Python, proficiency in PyTorch or TensorFlow, and hands-on experience with CNNs, pretrained networks, and attention modules. A solid understanding of Vision Transformers, recent architectures, and attention mechanisms is essential, and experience implementing and training object detection models, video analysis and temporal modeling, and audio classification workflows is highly desired. Familiarity with handling large-scale datasets, designing data pipelines, and training strategies for deep models is also beneficial.

If you are passionate about deep learning, possess the required skills and experience, and are eager to contribute to cutting-edge AI solutions, we encourage you to apply. Join us at CureBay and be part of a dynamic team dedicated to pushing the boundaries of technology and innovation.
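
As an illustrative sketch of integrating an attention mechanism into a CNN, as the role requires, here is a small squeeze-and-excitation style channel-attention block in PyTorch; the architecture is a toy example, not a production model:

```python
import torch
from torch import nn

class SEAttention(nn.Module):
    """Squeeze-and-excitation style channel attention."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.fc(x.mean(dim=(2, 3)))  # squeeze: global average pool per channel
        return x * w[:, :, None, None]   # excite: reweight feature-map channels

class TinyClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), SEAttention(16),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))

logits = TinyClassifier()(torch.randn(2, 3, 32, 32))
print(logits.shape)  # torch.Size([2, 10])
```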

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Technical Specialist - Cloud Engineer, you will design, develop, and deliver significant components of engineering solutions that achieve business goals. Your key responsibilities will include active participation in designing and developing new features, ensuring solutions are maintainable and integrated successfully, and assisting junior team members and overseeing their work where applicable. You will develop source code, CI/CD pipelines, and infrastructure and application configurations following detailed software requirements, and provide quality development of technical infrastructure components such as Cloud configuration, networking, security, storage, and Infrastructure as Code. Debugging, fixing, and supporting the L3 and L2 teams, as well as verifying source code through reviews, conducting unit testing, and integrating software components, are also part of your role. Moreover, you will contribute to problem analysis, root cause analysis, the implementation of architectural changes, and the creation of software product training materials. Managing application maintenance, performing technical change requests, identifying dependencies between components, suggesting continuous technical improvements, and collaborating with colleagues across the stages of the Software Development Lifecycle round out the responsibilities.

In terms of skills and experience, you should hold a Bachelor of Science degree in Computer Science or Software Engineering and have strong analytical and communication skills. Fluency in English, the ability to work in virtual teams, proficiency in Cloud-native systems and applications, and relevant Financial Services experience are required, along with expertise in Cloud offerings and services, Cloud-native development, DevOps, API management, Java or Python, ETL/data pipelines, and core tools like HP ALM, Jira, and the SDLC.

At our company, you will receive training, development, coaching, and support to excel in your career. We foster a culture of continuous learning and offer a range of flexible benefits tailored to suit your needs. We strive for a collaborative environment where you can excel every day, act responsibly, think commercially, and take initiative. For further information about our company and teams, please visit our website: https://www.db.com/company/company.htm. We welcome applications from all individuals and promote a positive, fair, and inclusive work environment.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

At ClearTrail, you will be part of a team dedicated to developing solutions that empower those focused on ensuring the safety of individuals, locations, and communities. For over 23 years, ClearTrail has been a trusted partner of law enforcement and federal agencies worldwide, committed to safeguarding nations and enhancing lives. We are leading the way in the future of intelligence gathering through the creation of innovative artificial intelligence and machine learning-based lawful interception and communication analytics solutions aimed at addressing the world's most complex challenges.

We are currently looking for a Big Data Java Developer with 2-4 years of experience to join our team in Indore. As a Big Data Java Developer at ClearTrail, your responsibilities will include:
- Designing and developing high-performance, scalable applications using Java and big data technologies.
- Building and maintaining efficient data pipelines for processing large volumes of structured and unstructured data.
- Developing microservices, APIs, and distributed systems.
- Working with Spark, HDFS, Ceph, Solr/Elasticsearch, Kafka, and Delta Lake.
- Mentoring and guiding junior team members.

If you are a problem-solver with strong analytical skills, excellent verbal and written communication abilities, and a passion for developing cutting-edge solutions, we invite you to join our team at ClearTrail and be part of our mission to make the world a safer place.

Posted 1 week ago

Apply