8.0 - 11.0 years
15 - 19 Lacs
bengaluru
Work from Office
About The Role
Project Role: Technology Architect
Project Role Description: Design and deliver technology architecture for a platform, product, or engagement. Define solutions to meet performance, capability, and scalability needs.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Cloud Data Architecture
Minimum 5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As a Technology Architect, you will engage in a dynamic environment where you will review and integrate all application requirements, ensuring that functional, security, integration, performance, quality, and operations needs are met. Your typical day will involve collaborating with various teams to assess technical architecture requirements and providing valuable input into critical decisions regarding hardware, network products, system software, and security measures. You will play a pivotal role in shaping the technological landscape of the organization, ensuring that all components work harmoniously to achieve business objectives.

Roles & Responsibilities:
- A minimum of 8 years of experience with the Databricks Unified Data Analytics Platform.
- Good experience implementing data ingestion pipelines from multiple sources and creating end-to-end data pipelines on the Databricks platform.
- Strong educational background in technology and information architectures, along with a proven track record of delivering impactful data-driven solutions.
- Strong requirement-analysis and technical-solutioning skills in Data and Analytics.
- Client-facing experience: running solution workshops and client visits, handling large RFP pursuits, and managing multiple stakeholders.

Technical Experience:
- 6 or more years of experience implementing data ingestion pipelines from multiple sources and creating end-to-end data pipelines on the Databricks platform.
- 2 or more years of experience using Python, PySpark, or Scala.
- Experience with Databricks on the cloud (any of AWS, Azure, or GCP): ETL, data engineering, data cleansing, and insertion into a data warehouse.
- Must-have skills: Databricks, Cloud Data Architecture, Python, Data Engineering.

Professional Attributes:
- Excellent writing, communication, and presentation skills.
- Eagerness to learn and develop on an ongoing basis.
- Excellent client-facing and interpersonal skills.

Qualification: 15 years of full-time education.
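A minimal sketch of the kind of Databricks ingestion step this role describes, assuming PySpark on a Databricks cluster; the landing path, key column, and target table are illustrative placeholders, not details from the posting:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks the session already exists; getOrCreate() simply returns it.
spark = SparkSession.builder.getOrCreate()

raw = (
    spark.read.format("json")
    .load("/mnt/landing/orders/")          # hypothetical landing path
)

cleaned = (
    raw.dropDuplicates(["order_id"])       # hypothetical business key
    .withColumn("ingested_at", F.current_timestamp())
)

# Append into a Delta table for downstream consumers.
cleaned.write.format("delta").mode("append").saveAsTable("bronze.orders")
```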
Posted recently
5.0 - 10.0 years
14 - 17 Lacs
mumbai
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address clients' needs.

Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Must have 5+ years of experience in Big Data: Hadoop, Spark (Scala, Python), HBase, Hive.
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
- Developed Python and PySpark programs for data analysis.
- Good working experience using Python to develop custom frameworks for rule generation (similar to a rules engine).
- Developed Python code to gather data from HBase and designed solutions implemented using PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.

Preferred technical and professional experience:
- Understanding of DevOps.
- Experience building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
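For illustration, a hedged sketch of a Hive-backed Spark aggregation of the sort this role involves; the database, table, and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-aggregation")
    .enableHiveSupport()      # lets Spark read and write Hive tables
    .getOrCreate()
)

events = spark.table("raw_db.events")            # hypothetical Hive source table
daily = (
    events.where(F.col("event_date") == "2024-01-01")
    .groupBy("customer_id")
    .agg(F.count("*").alias("event_count"))
)

daily.write.mode("overwrite").saveAsTable("mart_db.daily_customer_events")
```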
Posted recently
5.0 - 10.0 years
14 - 17 Lacs
navi mumbai
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address clients' needs.

Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Must have 5+ years of experience in Big Data: Hadoop, Spark (Scala, Python), HBase, Hive.
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
- Developed Python and PySpark programs for data analysis.
- Good working experience using Python to develop custom frameworks for rule generation (similar to a rules engine).
- Developed Python code to gather data from HBase and designed solutions implemented using PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.

Preferred technical and professional experience:
- Understanding of DevOps.
- Experience building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted recently
5.0 - 10.0 years
7 - 13 Lacs
bengaluru
Work from Office
We are seeking a highly skilled Senior Data Engineer with 5+ years of experience for our Bengaluru location (maximum 30 days' notice period). The ideal candidate will have strong expertise in designing, developing, and maintaining robust data ingestion frameworks, scalable pipelines, and DBT-based transformations. Responsibilities include building and optimizing DBT models, architecting ELT pipelines with orchestration tools like Airflow/Prefect, integrating workflows with AWS services (S3, Lambda, Glue, RDS), and ensuring performance optimization on platforms like Snowflake, Redshift, and Databricks. The candidate will implement CI/CD best practices for DBT, manage automated deployments, troubleshoot pipeline issues, and collaborate cross-functionally to deliver cloud-based real-time and batch data solutions. Strong SQL, scripting, API-integration, and AWS experience are essential.
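As a hedged illustration of the DBT-plus-orchestration stack named above, a minimal Airflow DAG (Airflow 2.4+ syntax) that runs and then tests a DBT project; the project path, profiles directory, and schedule are assumptions:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_elt_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",        # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics --profiles-dir /opt/dbt",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics --profiles-dir /opt/dbt",
    )
    # Only publish models that pass their tests.
    dbt_run >> dbt_test
```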
Posted 2 hours ago
5.0 - 9.0 years
8 - 14 Lacs
ludhiana
Work from Office
Key Responsibilities:
- Splunk ITSI Implementation: Develop and configure IT Service Intelligence (ITSI) modules, including KPI creation, service trees, and notable event aggregation.
- SIEM Development: Design, implement, and optimize Splunk SIEM solutions for threat detection, security monitoring, and log analysis.
- Dashboard & Visualization: Create advanced dashboards, reports, and visualizations using Splunk SPL (Search Processing Language).
- Data Ingestion & Parsing: Develop data onboarding, parsing, and field extractions from various log sources, including cloud and on-prem infrastructure.

Mandatory Key Skills: SIEM, Data Ingestion, data onboarding, Data Visualization, Dashboarding, Splunk
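For flavor, a hedged sketch of running an SPL search from Python with the Splunk SDK (splunklib); the host, credentials, index, and the SPL query itself are placeholders, and the exact results-reader class varies by SDK version:

```python
import splunklib.client as client          # pip install splunk-sdk
import splunklib.results as results

service = client.connect(
    host="splunk.example.com", port=8089,  # placeholder connection details
    username="admin", password="changeme",
)

# Placeholder SPL: top blocked source IPs, the kind of search a
# SIEM dashboard panel would run.
query = ("search index=security sourcetype=firewall action=blocked "
         "| stats count by src_ip | sort -count | head 10")

reader = results.JSONResultsReader(service.jobs.oneshot(query, output_mode="json"))
for item in reader:
    if isinstance(item, dict):             # skip diagnostic Message objects
        print(item["src_ip"], item["count"])
```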
Posted 2 hours ago
5.0 - 9.0 years
8 - 14 Lacs
jaipur
Work from Office
Key Responsibilities:
- Splunk ITSI Implementation: Develop and configure IT Service Intelligence (ITSI) modules, including KPI creation, service trees, and notable event aggregation.
- SIEM Development: Design, implement, and optimize Splunk SIEM solutions for threat detection, security monitoring, and log analysis.
- Dashboard & Visualization: Create advanced dashboards, reports, and visualizations using Splunk SPL (Search Processing Language).
- Data Ingestion & Parsing: Develop data onboarding, parsing, and field extractions from various log sources, including cloud and on-prem infrastructure.

Mandatory Key Skills: Dashboard, Data Visualization, Splunk SPL, Data Ingestion, Splunk SIEM solutions, IT Service Intelligence, Splunk ITSI Implementation
Posted 2 days ago
5.0 - 9.0 years
8 - 14 Lacs
kolkata
Work from Office
Key Responsibilities:
- Splunk ITSI Implementation: Develop and configure IT Service Intelligence (ITSI) modules, including KPI creation, service trees, and notable event aggregation.
- SIEM Development: Design, implement, and optimize Splunk SIEM solutions for threat detection, security monitoring, and log analysis.
- Dashboard & Visualization: Create advanced dashboards, reports, and visualizations using Splunk SPL (Search Processing Language).
- Data Ingestion & Parsing: Develop data onboarding, parsing, and field extractions from various log sources, including cloud and on-prem infrastructure.

Mandatory Key Skills: SIEM, Data Ingestion, data onboarding, Data Visualization, Dashboarding, Splunk
Posted 2 days ago
5.0 - 9.0 years
8 - 14 Lacs
bengaluru
Work from Office
Key Responsibilities:
- Splunk ITSI Implementation: Develop and configure IT Service Intelligence (ITSI) modules, including KPI creation, service trees, and notable event aggregation.
- SIEM Development: Design, implement, and optimize Splunk SIEM solutions for threat detection, security monitoring, and log analysis.
- Dashboard & Visualization: Create advanced dashboards, reports, and visualizations using Splunk SPL (Search Processing Language).
- Data Ingestion & Parsing: Develop data onboarding, parsing, and field extractions from various log sources, including cloud and on-prem infrastructure.

Mandatory Key Skills: IT Service Intelligence, Data Ingestion, Splunk SPL, Splunk SIEM, SIEM Development, Splunk
Posted 2 days ago
4.0 - 8.0 years
3 - 7 Lacs
pune
Work from Office
Job Location: Pune

Position Summary: As a data engineer, you will be responsible for delivering data intelligence solutions to our customers around the globe, based on an innovative product that provides insights into the performance of their material handling systems. You will work on implementing and deploying the product as well as designing solutions to fit it to customer needs. You will work together with an energetic, multidisciplinary team to build end-to-end data ingestion pipelines and implement and deploy dashboards.

Roles and responsibilities:
- Design and implement data and dashboarding solutions to maximize customer value.
- Deploy and automate the data pipelines and dashboards to enable further project implementation.
- Embrace working in an international, diverse team with an open and respectful atmosphere.
- Leverage data by making it available to other teams within our department to enable our platform vision.
- Communicate and work closely with other groups within Vanderlande and the project team.
- Work independently and self-reliantly with a proactive style of communication, taking ownership to provide the best possible solution.
- Be part of an agile team that encourages you to speak up freely about improvements, concerns, and blockages. As part of the Scrum methodology, independently create stories and participate in the refinement process.
- Collect feedback and always search for opportunities to improve the existing standardized product.
- Execute projects from conception through client handover with a positive contribution to technical performance and the organization.
- Take the lead in communication with the different stakeholders involved in the projects being deployed.

Skills:
- Bachelor's or master's degree in computer science, IT, or equivalent, and a minimum of 4 to 8 years of experience building and deploying complex data pipelines and data solutions.
- Experience deploying data pipelines using technologies like Databricks.
- Hands-on experience with Java and Databricks.
- Experience with visualization software, preferably Splunk (or else Grafana, Prometheus, Power BI, Tableau, or similar).
- Strong experience with SQL and Java, with hands-on experience in data modeling.
- Experience with PySpark or Spark for working with distributed data.
- Good to have: knowledge of Splunk (SPL).
- Experience with data schemas (e.g., JSON/XML/Avro).
- Experience deploying services as containers (e.g., Docker, Kubernetes).
- Experience working with cloud services (preferably Azure).
- Experience with streaming and/or batch storage (e.g., Kafka) is a plus.
- Experience in data quality management and monitoring is a plus.
- Strong communication skills in English.
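A minimal sketch of the streaming ingestion this role touches on: PySpark Structured Streaming reading sensor-style events from Kafka into Delta. The broker, topic, schema, and paths are assumptions, and the Kafka source requires the spark-sql-kafka package on the cluster:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.getOrCreate()

# Hypothetical event schema for material-handling sensor readings.
schema = StructType([
    StructField("equipment_id", StringType()),
    StructField("metric", StringType()),
    StructField("value", DoubleType()),
])

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "sensor-events")               # placeholder topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

(stream.writeStream
 .format("delta")
 .option("checkpointLocation", "/mnt/checkpoints/sensor")  # required for recovery
 .start("/mnt/delta/sensor_events"))
```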
Posted 2 days ago
5.0 - 9.0 years
8 - 14 Lacs
lucknow
Work from Office
Key Responsibilities:
- Splunk ITSI Implementation: Develop and configure IT Service Intelligence (ITSI) modules, including KPI creation, service trees, and notable event aggregation.
- SIEM Development: Design, implement, and optimize Splunk SIEM solutions for threat detection, security monitoring, and log analysis.
- Dashboard & Visualization: Create advanced dashboards, reports, and visualizations using Splunk SPL (Search Processing Language).
- Data Ingestion & Parsing: Develop data onboarding, parsing, and field extractions from various log sources, including cloud and on-prem infrastructure.

Mandatory Key Skills: IT Service Intelligence, Data Ingestion, Splunk SPL, Splunk SIEM, SIEM Development, Splunk
Posted 2 days ago
3.0 - 7.0 years
9 - 14 Lacs
kochi
Work from Office
- Design, develop, and maintain scalable and efficient big data processing pipelines on distributed computing systems.
- Collaborate with cross-functional teams to understand data requirements and design appropriate data solutions.
- Implement data ingestion, processing, and transformation processes to support various analytical and machine learning use cases.
- Optimize and tune data pipelines for performance, scalability, and reliability.
- Monitor and troubleshoot pipeline performance issues, identifying and resolving bottlenecks.
- Ensure data quality and integrity throughout the pipeline, implementing data validation and error handling mechanisms.
- Stay updated on emerging technologies and best practices in big data processing and analytics, incorporating them into our data engineering practices.
- Document design decisions, technical specifications, and data workflows.
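The data validation and error handling point can be illustrated with a small PySpark sketch that quarantines bad rows instead of failing the whole pipeline; the paths and rules are placeholders:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("/data/incoming/")   # placeholder input path

# Split records into valid and rejected sets rather than aborting the job.
rules = F.col("id").isNotNull() & (F.col("amount") >= 0)
valid = df.where(rules)
rejected = df.where(~rules).withColumn("rejected_at", F.current_timestamp())

valid.write.mode("append").parquet("/data/curated/")
rejected.write.mode("append").parquet("/data/quarantine/")  # kept for inspection
```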
Posted 2 days ago
6.0 - 9.0 years
9 - 13 Lacs
mumbai
Work from Office
Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable, efficient data pipelines using Data Factory in Fabric (Data Pipeline, Data Flow Gen 2, etc.), PySpark notebooks, Spark SQL, and Python, covering data ingestion, transformation, and loading.
- Experience ingesting data from SAP systems (SAP ECC/S4HANA/SAP BW, etc.) is a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
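As a hedged example of a Fabric PySpark notebook transformation (the `spark` session is supplied by the Fabric runtime; the lakehouse and table names here are hypothetical):

```python
from pyspark.sql import functions as F

# Read a bronze table exposed to Spark by the attached lakehouse
# (names are placeholders, not from the posting).
orders = spark.read.table("lakehouse_bronze.orders")

enriched = (
    orders.withColumn("order_month", F.date_trunc("month", F.col("order_date")))
          .groupBy("order_month", "region")
          .agg(F.sum("amount").alias("revenue"))
)

# Persist the aggregate for reporting, e.g. a Power BI semantic model.
enriched.write.mode("overwrite").saveAsTable("lakehouse_gold.monthly_revenue")
```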
Posted 3 days ago
2.0 - 5.0 years
4 - 8 Lacs
hyderabad
Work from Office
NTT DATA Services currently seeks a Python Hadoop Developer to join our team. Skillset: Python, Hadoop, ETL, RDBMS, Unix, GCP Vertex AI. Responsibilities: data ingestion pipelines, AI/ML model deployments, data engineering, and ML engineering with Python.
Posted 3 days ago
2.0 - 7.0 years
0 Lacs
maharashtra
On-site
Role Overview: As a Hadoop Admin for our client in Mumbai, you will be responsible for managing and administering the on-premise Hortonworks Hadoop cluster. Your role will involve tasks such as user access management, data lake monitoring, designing and setting up Hadoop clusters, and managing multiple Hadoop utilities.

Key Responsibilities:
- Hands-on experience managing and administering an on-premise Hortonworks Hadoop cluster
- Knowledge of user access management
- Data lake monitoring, including cluster health checkups, database size, number of connections, I/O, edge node utilities, and load balancing
- Experience designing, estimating, and setting up Hadoop clusters
- Managing multiple Hadoop utilities such as data ingestion, extraction, transformation, reporting, exploratory reporting, advanced analytics, and the ML platform

Qualifications Required:
- Experience: 2 to 7 years
- Location: Mumbai (Lower Parel or Bandra Kurla Complex)
- Job ID: Hadoop_Admin_Mumbai
- Job Level: Mid Level
- Job Industry: IT
- Qualification: Any
- Annual CTC: Open
- Number of Vacancies: 3
- Notice Period: Short notice

Please send updated resumes to recruiter1@pnrsoftsol.com if you are interested in this opportunity.
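A hypothetical health-check helper for the cluster monitoring duties above, shelling out to the standard `hdfs dfsadmin -report` command; the summary labels it scans for may vary by Hadoop version:

```python
import subprocess

def hdfs_report() -> str:
    """Return the text of `hdfs dfsadmin -report` (requires HDFS admin rights)."""
    out = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

# Surface the headline capacity and datanode-liveness lines for a quick check.
KEYS = ("Live datanodes", "Dead datanodes", "DFS Used%", "DFS Remaining")
for line in hdfs_report().splitlines():
    if any(line.strip().startswith(k) for k in KEYS):
        print(line.strip())   # could feed an alert if remaining capacity is low
```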
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
You are a highly skilled and motivated Power BI Developer with 4-6 years of experience in designing, developing, and deploying Power BI solutions. Your role involves transforming raw data into actionable insights through interactive dashboards and reports while ensuring secure and scalable data access.

Key Responsibilities:
- Develop and maintain Power Query (M) scripts for data transformation and ingestion.
- Integrate data from multiple sources including SQL Server, Excel, APIs, cloud platforms (Azure, AWS), and third-party connectors.
- Configure and manage Power BI Gateways for scheduled data refreshes.
- Implement Row-Level Security (RLS) to ensure secure data access.
- Publish and manage reports in Power BI Service, including workspace management and app deployment.
- Collaborate with business stakeholders to gather requirements and translate them into technical solutions.
- Create wireframes and mockups using Figma, Miro, or similar tools to visualize dashboard layouts and user journeys.
- Optimize performance of reports and datasets.
- Stay updated with the latest Power BI features and best practices.

Key Skills:
- Strong proficiency in DAX and Power Query (M).
- Experience with data modeling, star/snowflake schemas, and normalization.
- Hands-on experience with Power BI Service, including dashboard publishing and workspace management.
- Strong understanding of Power BI Gateways and data refresh scheduling.
- Hands-on experience implementing Row-Level Security (RLS).
- Familiarity with data ingestion from various sources (SQL, Excel, REST APIs, cloud storage).
- Experience with wireframing tools like Miro, Figma, or Balsamiq.
- Understanding of version control and deployment pipelines (Dev/Test/Prod).
- Excellent problem-solving and communication skills.
- Microsoft Certified: Power BI Data Analyst Associate (PL-300) or equivalent.
- Exposure to embedding Power BI dashboards into external platforms (e.g., web apps, SharePoint).

About the Company: NeuIQ is a new-age technology services firm specializing in solving enterprise business transformation and experience challenges through cutting-edge, AI-powered data and technology solutions. Their vision is to build a scalable and profitable technology implementation business with data engineering as its foundation. NeuIQ's expertise lies in implementing enterprise SaaS platforms such as Qualtrics, ServiceNow, Snowflake, and Databricks, enabling organizations to unlock actionable insights and maximize the value of their AI and technology investments. They are committed to empowering enterprises to stay relevant, impactful, and ahead in today's dynamic landscape.
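As a hedged sketch of automating a dataset refresh through the Power BI REST API (the workspace and dataset IDs are placeholders, and a real script would acquire the Azure AD token via MSAL with an app registration that has Power BI API permissions):

```python
import requests

TOKEN = "<azure-ad-access-token>"   # obtain via MSAL in practice
GROUP_ID = "<workspace-id>"         # placeholder Power BI workspace ID
DATASET_ID = "<dataset-id>"         # placeholder dataset ID

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
    f"/datasets/{DATASET_ID}/refreshes",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"notifyOption": "MailOnFailure"},
    timeout=30,
)
resp.raise_for_status()   # 202 Accepted means the refresh was queued
```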
Posted 3 days ago
6.0 - 11.0 years
22 - 37 Lacs
gurugram, chennai, bengaluru
Work from Office
Why Choose Decision Point: At Decision Point, we empower data-driven transformation by delivering innovative, scalable, and future-ready analytics solutions. With a proven track record in the CPG, Retail, and Manufacturing industries, we combine deep domain expertise with cutting-edge technologies to enable smarter decisions at every level of the enterprise. By joining our team, you'll be part of a collaborative culture that values creativity, learning, and continuous improvement. We offer opportunities to work on high-impact projects with global clients and to leverage the latest cloud and data engineering tools to solve real-world business challenges.

Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines using tools such as PySpark, SQL, Python, and DBT.
- Lead the data ingestion and transformation processes from multiple sources into cloud data platforms (Azure, AWS, Snowflake).
- Architect and implement data models, ensuring data consistency, integrity, and performance optimization.
- Oversee data architecture initiatives and contribute to best practices in data engineering, data quality, and governance.
- Implement and manage cloud services (Azure Data Factory, Azure Synapse, AWS Glue, S3, Lambda, etc.).
- Collaborate with cross-functional teams including data scientists, BI developers, and product owners.
- Provide technical leadership and mentorship to junior engineers and analysts.
- Monitor and troubleshoot data pipelines to ensure reliability and performance.

Key Skills and Qualifications:
- Bachelor's or master's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience in Data Engineering or related roles.
- Proficiency in Python, SQL, and PySpark.
- Strong experience with cloud platforms: Azure and/or AWS.
- Hands-on experience with ETL development, data pipeline orchestration, and automation.
- Solid understanding of data modeling (dimensional, star/snowflake schema).
- Experience with Snowflake, DBT, and modern data stack tools.
- Familiarity with CI/CD pipelines, version control (Git), and agile methodologies.
- Excellent communication, problem-solving, and leadership skills.
Posted 3 days ago
4.0 - 7.0 years
7 - 11 Lacs
noida
Work from Office
Develop and maintain Python scripts to support the deployment and integration of GenAI models. Automate data ingestion, preprocessing, and model execution workflows using Python and other relevant tools. Build interfaces between GenAI models and existing systems or APIs. Monitor and optimize the performance of deployed models and pipelines. Ensure robust logging, error handling, and system reliability. Stay updated with advancements in GenAI technologies and suggest improvements. Experience with the Agile development process. Excellent communication, problem-solving, debugging, and troubleshooting skills.

Mandatory Competencies: Data Science and Machine Learning (Gen AI, Python), Agile (Extreme Programming), Programming Language (Python, Python Shell), ETL (AWS Glue), Communication and collaboration
Posted 3 days ago
8.0 - 13.0 years
2 - 2 Lacs
hyderabad
Work from Office
SUMMARY
Job Role: Senior Database Engineer
Location: Hyderabad
Start Date: As soon as possible

Key Responsibilities:
- Build data pipelines for optimal extraction, transformation, and loading from various data sources using SQL and cloud database technologies.
- Work with stakeholders (Executive, Product, Data, and Design teams) to support data-related technical issues.
- Collaborate with data and analytics experts to enhance data system functionality.
- Assemble large, complex data sets that meet business requirements.
- Analyze and improve existing SQL code for performance, security, and maintainability.
- Design and implement internal process improvements (automation, scalability).
- Unit test databases and perform bug fixes.
- Develop best practices for database design and development.
- Lead database projects across scrum teams.
- Support dashboard development through exploratory data analysis (desirable).

Key Requirements:
Experience: 8-12 years preferred
Required Skills:
- Strong SQL experience, especially with PostgreSQL (cloud-hosted in AWS/Azure/GCP).
- Experience with cloud-based data warehouses like Snowflake (preferred) or Azure Synapse.
- Proficiency in ETL/ELT tools like IBM StreamSets, SnapLogic, and DBT.
- Knowledge of data modeling and OLAP systems.
- Deep understanding of databases, data marts, and enterprise systems.
- Expertise in data ingestion, cleaning, and de-duplication.
- Ability to fine-tune report queries and design indexes.
- Familiarity with SQL security techniques (e.g., column-level encryption, TDE).
- Experience mapping source data into ER models (desirable).
- Adherence to database standards (naming conventions, architecture).
- Exposure to source control tools (Git, Azure DevOps).
- Understanding of Agile methodologies (Scrum, Kanban).
- Experience with NoSQL databases and real-time replication (desirable).
- Experience with CI/CD automation tools (desirable).
- Programming experience in Golang and Python, and visualization tools (Power BI/Tableau) (desirable).

Personal Attributes:
- Strong communication skills.
- Ability to work in distributed teams.
- Capable of managing multiple timelines.
- Able to articulate data insights for business decisions.
- Comfortable with ambiguity and risk management.
- Able to explain complex concepts to non-data audiences.
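For illustration, a small psycopg2 sketch of the query-tuning loop described above: inspect a plan, then add an index for the hot filter. The DSN, table, and column are hypothetical:

```python
import psycopg2

conn = psycopg2.connect("dbname=analytics user=report_user host=db.example.com")
with conn, conn.cursor() as cur:
    # Inspect the plan before deciding on an index.
    cur.execute(
        "EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = %s", (42,)
    )
    for (plan_line,) in cur.fetchall():
        print(plan_line)

    # Add an index covering the hot report filter (plain CREATE INDEX here;
    # production systems often prefer CREATE INDEX CONCURRENTLY).
    cur.execute(
        "CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders (customer_id)"
    )
conn.close()
```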
Posted 3 days ago
7.0 - 10.0 years
10 - 14 Lacs
hyderabad
Work from Office
About The Opportunity: A fast-scaling technology company operating in the Enterprise AI / Generative AI sector, building production-grade LLM-driven products and intelligent automation for global clients. We deliver secure, low-latency generative systems that power conversational AI, summarization, code generation, and retrieval-augmented applications across cloud-native environments. We are hiring a Senior Generative AI Engineer (7+ years) to own architecture, model development, and production deployment of advanced generative systems. This is a fully remote role for candidates based in India.

Role & Responsibilities:
- Lead end-to-end design and delivery of Generative AI/LLM solutions: data ingestion, pre-processing, model training/fine-tuning, evaluation, and scalable inference.
- Develop and productionize transformer-based models (instruction tuning, LoRA, quantization) using PyTorch/TensorFlow and Hugging Face tooling.
- Architect and implement RAG pipelines integrating vector databases (FAISS/Milvus/Chroma), dense/sparse retrieval, and scalable embedding workflows.
- Optimize inference throughput and latency using ONNX/TorchScript/TensorRT, autoscaling, and cost-efficient deployment patterns on cloud infrastructure.
- Define MLOps best practices: CI/CD for models, containerization, observability, automated retraining, drift detection, and rollout strategies.
- Mentor engineers, conduct code reviews, and collaborate with product and data science teams to translate research into reliable production systems.

Skills & Qualifications:
Must-Have:
- 7+ years of software/ML engineering experience with significant time on generative/LLM projects.
- Strong proficiency in Python and deep learning frameworks (PyTorch preferred; TensorFlow acceptable).
- Hands-on experience with Hugging Face Transformers, tokenizers, and training and fine-tuning workflows.
- Proven experience building RAG systems and working with vector stores (FAISS, Milvus, Chroma) and embedding pipelines.
- Experience deploying models to production using Docker, Kubernetes, and cloud services (AWS/GCP/Azure).
- Solid software engineering practices: unit testing, CI/CD, code reviews, and monitoring for ML systems.
Preferred:
- Experience with model compression/acceleration (quantization, distillation), ONNX, or TensorRT.
- Familiarity with LangChain or similar orchestration frameworks, agentic workflows, and tool-calling patterns.
- Background in prompt engineering and instruction tuning; exposure to Reinforcement Learning from Human Feedback (RLHF).
- Knowledge of data privacy, secure model serving, and compliance controls for enterprise deployments.

Benefits & Culture Highlights:
- Fully remote, India-based role with flexible hours and a results-oriented culture.
- Opportunity to shape product architecture and scale cutting-edge generative AI for enterprise customers.
- Collaborative environment with senior ML engineers, data scientists, and product stakeholders; mentorship and career growth.

To apply, you should be passionate about bringing advanced generative models into production, comfortable with both research-to-production translation and the operational discipline required to run mission-critical AI systems.
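A minimal sketch of the RAG retrieval core this role asks for, using FAISS with sentence-transformers embeddings; the model name, corpus, and scoring choices are illustrative, and a production pipeline would add chunking, persistence, and reranking:

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Tiny placeholder corpus standing in for real document chunks.
docs = [
    "Reset a password from the account settings page.",
    "Invoices are emailed on the first of the month.",
    "Support is available 24/7 via chat.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(docs, normalize_embeddings=True)

# Inner product on normalized vectors == cosine similarity.
index = faiss.IndexFlatIP(emb.shape[1])
index.add(np.asarray(emb, dtype="float32"))

query = model.encode(["How do I get my invoice?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), k=2)

# The retrieved context would be spliced into the LLM prompt downstream.
context = "\n".join(docs[i] for i in ids[0])
print(context)
```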
Posted 4 days ago
7.0 - 10.0 years
10 - 14 Lacs
gurugram
Work from Office
About The Opportunity: A fast-scaling technology company operating in the Enterprise AI / Generative AI sector, building production-grade LLM-driven products and intelligent automation for global clients. We deliver secure, low-latency generative systems that power conversational AI, summarization, code generation, and retrieval-augmented applications across cloud-native environments. We are hiring a Senior Generative AI Engineer (7+ years) to own architecture, model development, and production deployment of advanced generative systems. This is a fully remote role for candidates based in India.

Role & Responsibilities:
- Lead end-to-end design and delivery of Generative AI/LLM solutions: data ingestion, pre-processing, model training/fine-tuning, evaluation, and scalable inference.
- Develop and productionize transformer-based models (instruction tuning, LoRA, quantization) using PyTorch/TensorFlow and Hugging Face tooling.
- Architect and implement RAG pipelines integrating vector databases (FAISS/Milvus/Chroma), dense/sparse retrieval, and scalable embedding workflows.
- Optimize inference throughput and latency using ONNX/TorchScript/TensorRT, autoscaling, and cost-efficient deployment patterns on cloud infrastructure.
- Define MLOps best practices: CI/CD for models, containerization, observability, automated retraining, drift detection, and rollout strategies.
- Mentor engineers, conduct code reviews, and collaborate with product and data science teams to translate research into reliable production systems.

Skills & Qualifications:
Must-Have:
- 7+ years of software/ML engineering experience with significant time on generative/LLM projects.
- Strong proficiency in Python and deep learning frameworks (PyTorch preferred; TensorFlow acceptable).
- Hands-on experience with Hugging Face Transformers, tokenizers, and training and fine-tuning workflows.
- Proven experience building RAG systems and working with vector stores (FAISS, Milvus, Chroma) and embedding pipelines.
- Experience deploying models to production using Docker, Kubernetes, and cloud services (AWS/GCP/Azure).
- Solid software engineering practices: unit testing, CI/CD, code reviews, and monitoring for ML systems.
Preferred:
- Experience with model compression/acceleration (quantization, distillation), ONNX, or TensorRT.
- Familiarity with LangChain or similar orchestration frameworks, agentic workflows, and tool-calling patterns.
- Background in prompt engineering and instruction tuning; exposure to Reinforcement Learning from Human Feedback (RLHF).
- Knowledge of data privacy, secure model serving, and compliance controls for enterprise deployments.

Benefits & Culture Highlights:
- Fully remote, India-based role with flexible hours and a results-oriented culture.
- Opportunity to shape product architecture and scale cutting-edge generative AI for enterprise customers.
- Collaborative environment with senior ML engineers, data scientists, and product stakeholders; mentorship and career growth.

To apply, you should be passionate about bringing advanced generative models into production, comfortable with both research-to-production translation and the operational discipline required to run mission-critical AI systems.
Posted 4 days ago
6.0 - 9.0 years
9 - 13 Lacs
bengaluru
Work from Office
About the job:
Role: Microsoft Fabric Data Engineer
Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable, efficient data pipelines using Data Factory in Fabric (Data Pipeline, Data Flow Gen 2, etc.), PySpark notebooks, Spark SQL, and Python, covering data ingestion, transformation, and loading.
- Experience ingesting data from SAP systems (SAP ECC/S4HANA/SAP BW, etc.) is a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
Posted 4 days ago
6.0 - 9.0 years
9 - 13 Lacs
noida
Work from Office
Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable, efficient data pipelines using Data Factory in Fabric (Data Pipeline, Data Flow Gen 2, etc.), PySpark notebooks, Spark SQL, and Python, covering data ingestion, transformation, and loading.
- Experience ingesting data from SAP systems (SAP ECC/S4HANA/SAP BW, etc.) is a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
Posted 4 days ago
7.0 - 10.0 years
10 - 14 Lacs
mumbai
Work from Office
Role & Responsibilities:
- Lead end-to-end design and delivery of Generative AI/LLM solutions: data ingestion, pre-processing, model training/fine-tuning, evaluation, and scalable inference.
- Develop and productionize transformer-based models (instruction tuning, LoRA, quantization) using PyTorch/TensorFlow and Hugging Face tooling.
- Architect and implement RAG pipelines integrating vector databases (FAISS/Milvus/Chroma), dense/sparse retrieval, and scalable embedding workflows.
- Optimize inference throughput and latency using ONNX/TorchScript/TensorRT, autoscaling, and cost-efficient deployment patterns on cloud infrastructure.
- Define MLOps best practices: CI/CD for models, containerization, observability, automated retraining, drift detection, and rollout strategies.
- Mentor engineers, conduct code reviews, and collaborate with product and data science teams to translate research into reliable production systems.

Skills & Qualifications:
Must-Have:
- 7+ years of software/ML engineering experience with significant time on generative/LLM projects.
- Strong proficiency in Python and deep learning frameworks (PyTorch preferred; TensorFlow acceptable).
- Hands-on experience with Hugging Face Transformers, tokenizers, and training and fine-tuning workflows.
- Proven experience building RAG systems and working with vector stores (FAISS, Milvus, Chroma) and embedding pipelines.
- Experience deploying models to production using Docker, Kubernetes, and cloud services (AWS/GCP/Azure).
- Solid software engineering practices: unit testing, CI/CD, code reviews, and monitoring for ML systems.
Preferred:
- Experience with model compression/acceleration (quantization, distillation), ONNX, or TensorRT.
- Familiarity with LangChain or similar orchestration frameworks, agentic workflows, and tool-calling patterns.
- Background in prompt engineering and instruction tuning; exposure to Reinforcement Learning from Human Feedback (RLHF).
- Knowledge of data privacy, secure model serving, and compliance controls for enterprise deployments.

Benefits & Culture Highlights:
- Fully remote, India-based role with flexible hours and a results-oriented culture.
- Opportunity to shape product architecture and scale cutting-edge generative AI for enterprise customers.
- Collaborative environment with senior ML engineers, data scientists, and product stakeholders; mentorship and career growth.

To apply, you should be passionate about bringing advanced generative models into production, comfortable with both research-to-production translation and the operational discipline required to run mission-critical AI systems.
Posted 4 days ago
7.0 - 10.0 years
10 - 14 Lacs
bengaluru
Work from Office
About The Opportunity: A fast-scaling technology company operating in the Enterprise AI / Generative AI sector, building production-grade LLM-driven products and intelligent automation for global clients. We deliver secure, low-latency generative systems that power conversational AI, summarization, code generation, and retrieval-augmented applications across cloud-native environments. We are hiring a Senior Generative AI Engineer (7+ years) to own architecture, model development, and production deployment of advanced generative systems. This is a fully remote role for candidates based in India.

Role & Responsibilities:
- Lead end-to-end design and delivery of Generative AI/LLM solutions: data ingestion, pre-processing, model training/fine-tuning, evaluation, and scalable inference.
- Develop and productionize transformer-based models (instruction tuning, LoRA, quantization) using PyTorch/TensorFlow and Hugging Face tooling.
- Architect and implement RAG pipelines integrating vector databases (FAISS/Milvus/Chroma), dense/sparse retrieval, and scalable embedding workflows.
- Optimize inference throughput and latency using ONNX/TorchScript/TensorRT, autoscaling, and cost-efficient deployment patterns on cloud infrastructure.
- Define MLOps best practices: CI/CD for models, containerization, observability, automated retraining, drift detection, and rollout strategies.
- Mentor engineers, conduct code reviews, and collaborate with product and data science teams to translate research into reliable production systems.

Skills & Qualifications:
Must-Have:
- 7+ years of software/ML engineering experience with significant time on generative/LLM projects.
- Strong proficiency in Python and deep learning frameworks (PyTorch preferred; TensorFlow acceptable).
- Hands-on experience with Hugging Face Transformers, tokenizers, and training and fine-tuning workflows.
- Proven experience building RAG systems and working with vector stores (FAISS, Milvus, Chroma) and embedding pipelines.
- Experience deploying models to production using Docker, Kubernetes, and cloud services (AWS/GCP/Azure).
- Solid software engineering practices: unit testing, CI/CD, code reviews, and monitoring for ML systems.
Preferred:
- Experience with model compression/acceleration (quantization, distillation), ONNX, or TensorRT.
- Familiarity with LangChain or similar orchestration frameworks, agentic workflows, and tool-calling patterns.
- Background in prompt engineering and instruction tuning; exposure to Reinforcement Learning from Human Feedback (RLHF).
- Knowledge of data privacy, secure model serving, and compliance controls for enterprise deployments.

Benefits & Culture Highlights:
- Fully remote, India-based role with flexible hours and a results-oriented culture.
- Opportunity to shape product architecture and scale cutting-edge generative AI for enterprise customers.
- Collaborative environment with senior ML engineers, data scientists, and product stakeholders; mentorship and career growth.
Posted 4 days ago
5.0 - 8.0 years
25 - 40 Lacs
pune, gurugram, bengaluru
Hybrid
Salary: 25 to 40 LPA
Experience: 5 to 10 years
Location: Gurgaon/Pune/Bengaluru
Notice: Immediate to 30 days

Job Profile: Experienced Data Engineer with a strong foundation in designing, building, and maintaining scalable data pipelines and architectures. Skilled in transforming raw data into clean, structured formats for analytics and business intelligence. Proficient in modern data tools and technologies such as SQL, T-SQL, Python, Databricks, and cloud platforms (Azure). Adept at data wrangling, modeling, and ETL/ELT development, and at ensuring data quality, integrity, and security. Collaborative team player with a track record of enabling data-driven decision-making across business units.

As a Data Engineer, the candidate will work on assignments for one of our Utilities clients. Collaborating with cross-functional teams and stakeholders involves gathering data requirements, aligning business goals, and translating them into scalable data solutions. The role includes working closely with data analysts, scientists, and business users to understand needs, designing robust data pipelines, and ensuring data is accessible, reliable, and well documented. Regular communication, iterative feedback, and joint problem-solving are key to delivering high-impact, data-driven outcomes that support organizational objectives. This position requires a proven track record of transforming processes and driving customer value and cost savings, with experience running end-to-end analytics for large-scale organizations.

Responsibilities:
- Design, build, and maintain scalable data pipelines to support analytics, reporting, and advanced modeling needs.
- Collaborate with consultants, analysts, and clients to understand data requirements and translate them into effective data solutions.
- Ensure data accuracy, quality, and integrity through validation, cleansing, and transformation processes.
- Develop and optimize data models, ETL workflows, and database architectures across cloud and on-premises environments.
- Support data-driven decision-making by delivering reliable, well-structured datasets and enabling self-service analytics.
- Provide seamless integration with cloud platforms (Azure), making it easy to build and deploy end-to-end data pipelines in the cloud.
- Use scalable Databricks clusters for handling large datasets and complex computations, optimizing performance and cost.

Must have:
- Client engagement experience and collaboration with cross-functional teams.
- Data engineering background in Databricks.
- Capable of working effectively as an individual contributor or in collaborative team environments.
- Effective communication and thought leadership with a proven record.

Candidate Profile:
- Bachelor's or master's degree in economics, mathematics, computer science/engineering, operations research, or related analytics areas.
- 3+ years of experience, which must be in data engineering.
- Hands-on experience with SQL, Python, Databricks, and cloud platforms like Azure.
- Prior experience managing and delivering end-to-end projects.
- Outstanding written and verbal communication skills.
- Able to work in a fast-paced, continuously evolving environment and ready to take on uphill challenges.
- Able to understand cross-cultural differences and work with clients across the globe.
Posted 4 days ago