Jobs
Interviews

4894 Data Processing Jobs - Page 30

Set up a Job Alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As a Data Scientist specializing in Generative AI and LLMs, you will be part of our dynamic team, bringing your expertise in AI, machine learning, and deep learning to solve complex business challenges. With a focus on generative models, you will leverage your 2+ years of experience in the field to contribute to innovative solutions.

Your responsibilities will include:
- Working with generative models such as GANs, VAEs, and transformer-based models like GPT-3/4, BERT, and DALL-E.
- Applying model fine-tuning, transfer learning, and prompt engineering within the context of large language models (LLMs).
- Using reinforcement learning and advanced machine learning techniques to tackle generative tasks effectively.
- Implementing solutions in Python with the relevant libraries and frameworks.
- Applying proven experience in document detail extraction, feature engineering, and data processing and manipulation to build innovative data applications with tools like Streamlit.
- Drawing on advanced expertise in prompt engineering, chain-of-thought processes, and AI agents.

Strong problem-solving skills and the ability to collaborate effectively in a team environment will be crucial for success, and excellent communication skills will allow you to convey complex technical concepts to non-technical stakeholders with clarity. If you are passionate about AI, machine learning, and deep learning, and eager to contribute to cutting-edge projects, we invite you to join us as a Data Scientist specializing in Generative AI and LLMs.
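As a concrete illustration of the prompt-engineering work this role describes, here is a minimal sketch of few-shot prompt assembly for an LLM extraction task. The instruction text, field names, and examples are invented for illustration; a real system would plug in its own schema and model client.

```python
# Hypothetical few-shot prompt builder: instruction, worked examples, then the
# target document. Everything about the task (invoices) is an assumption.

def build_extraction_prompt(examples, document):
    """Assemble a few-shot prompt string for a document-extraction task."""
    lines = ["Extract the invoice number and total from the document.", ""]
    for doc, answer in examples:
        lines.append(f"Document: {doc}")
        lines.append(f"Answer: {answer}")
        lines.append("")  # blank line between worked examples
    lines.append(f"Document: {document}")
    lines.append("Answer:")  # leave the final answer for the model to complete
    return "\n".join(lines)

examples = [
    ("Invoice INV-001, total $50", "INV-001 | $50"),
    ("Invoice INV-002, total $75", "INV-002 | $75"),
]
prompt = build_extraction_prompt(examples, "Invoice INV-003, total $120")
```

The resulting string would be sent as-is to whatever LLM endpoint the team uses; the worked examples are what steer the model toward the desired output format.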

Posted 2 weeks ago

Apply

0.0 - 4.0 years

0 Lacs

Gujarat

On-site

As an MBA graduate based in Nadiad, Gujarat, you will be responsible for performing market research and gathering and processing research data. You will also handle basic administrative duties such as printing, sending emails, and ordering office supplies. Your role will involve assisting and coordinating with the sales team, supporting the Front Office team, managing inventory control, and organizing staff meetings while keeping calendars up to date. Processing company receipts, invoices, and bills will also be part of your responsibilities, and you will provide assistance and support to management in various tasks. This is a full-time, permanent position based in Nadiad, Gujarat, with a day shift schedule, a yearly bonus opportunity, and in-person work.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

About Founding Minds: Founding Minds is a renowned product development partner within the software industry, collaborating with clients worldwide to create innovative products. Serving as an incubator for numerous startups, we offer a dynamic environment where you can apply your creativity and expertise. As a valued contributor at Founding Minds, you will engage with a diverse range of ideas, collaborate with talented individuals, broaden your perspective, explore various technologies, conduct in-depth research, and take full ownership of your tasks. If you are passionate about your craft, Founding Minds provides ample opportunities for professional growth and advancement.

Job Summary: We are looking for a proactive and seasoned Senior Quantitative Analyst to join our expanding team. This role requires exceptional analytical skills, a background in healthcare or life sciences, and the capacity to guide and mentor two junior data analysts.

Key Responsibilities:
- Supervise and lead two data analysts to ensure excellence and consistency in all quantitative research.
- Engage in all stages of quantitative research, from crafting questionnaires and programming surveys to processing data, conducting analysis, and presenting findings.
- Own data management, overseeing datasets from intake to final output to uphold data integrity, precision, and accessibility.
- Apply advanced analytical methodologies such as segmentation, key driver analysis, TURF, conjoint, and MaxDiff to derive actionable insights.
- Collaborate closely with internal teams and clients to establish project objectives and develop appropriate research strategies.
- Maintain and improve documentation standards for consistent data procedures and deliverables.

Minimum Qualifications:
- A Bachelor's or Master's degree in health sciences, life sciences, public health, biostatistics, or a related discipline.
- A minimum of 2 years of experience in healthcare or pharmaceutical market research (prior consulting experience is highly desirable).
- Demonstrated ability to manage multiple projects and meet client-driven deadlines in a fast-paced setting.
- Proficiency in handling pharmaceutical datasets and familiarity with healthcare terminology and principles.
- Strong expertise in Excel (including pivot tables, advanced formulas, and data modeling) and PowerPoint; experience with statistical tools such as SPSS, R, or Python is advantageous.
- Exceptional verbal and written communication abilities.
- Flexibility to work with a 50% overlap with US East Coast business hours.
- Ability to interact with clients professionally and represent projects effectively.
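Of the analytical methodologies this posting names, TURF (Total Unduplicated Reach and Frequency) is simple enough to sketch: given which respondents each item appeals to, find the subset of items that maximizes unduplicated reach. The data below is invented for illustration; real studies run this over survey responses, usually in R, SPSS, or Python.

```python
from itertools import combinations

def turf_best_subset(appeal, k):
    """Brute-force TURF: the size-k item subset with the largest union of
    reached respondents. Fine for small item counts; real tools use greedy
    or exhaustive search with pruning."""
    best, best_reach = None, -1
    for combo in combinations(appeal, k):
        reached = set().union(*(appeal[item] for item in combo))
        if len(reached) > best_reach:
            best, best_reach = combo, len(reached)
    return best, best_reach

# Invented example: which respondents (by id) each product concept appeals to.
appeal = {
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {5, 6},
    "D": {1, 2},
}
combo, reach = turf_best_subset(appeal, 2)
```

Here concepts A and C together reach five of the six respondents, more than any other pair, which is exactly the "unduplicated reach" question TURF answers.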

Posted 2 weeks ago

Apply

4.0 - 9.0 years

7 - 11 Lacs

Mumbai

Work from Office

Job Description:
- Lead the design and implementation of complex data solutions with a business-centric approach.
- Guide junior developers and provide technical mentorship.
- Ensure alignment of data architecture with marketing and business strategies.
- Work within Agile development teams, contributing to sprints and ceremonies.
- Design and implement CI/CD pipelines to support automated deployments and testing.
- Apply data engineering best practices to ensure scalable, maintainable codebases.
- Develop robust data pipelines and solutions using Python and SQL.
- Understand and manipulate business data to support marketing and audience targeting efforts.
- Collaborate with cross-functional teams to deliver data solutions that meet business needs.
- Communicate effectively with stakeholders to gather requirements and present solutions.
- Follow best practices for data processing and coding standards.

Skills:
- Proficient in Python for data manipulation and automation.
- Strong experience with SQL development (knowledge of MS SQL is a plus).
- Excellent written and oral communication skills.
- Deep understanding of business data, especially as it relates to marketing and audience targeting.
- Experience with Agile methodologies and CI/CD processes.
- Experience with MS SQL.
- Familiarity with SAS.
- Good to have: B2B and AWS knowledge.
- Nice to have: hands-on experience with orchestration and automation tools such as Snowflake Tasks and Streams.

Location: DGS India - Mumbai - Thane Ashar IT Park
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
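The Python-and-SQL pipeline work this role describes can be sketched with the standard library alone. The sketch below uses sqlite3 as a stand-in for MS SQL or Snowflake, and the table, columns, and rows are invented: load raw audience rows, then roll them up per segment the way a marketing-targeting pipeline might.

```python
import sqlite3

# Toy extract-load-aggregate step; sqlite3 stands in for the real warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audience (user_id INTEGER, segment TEXT, spend REAL)")
conn.executemany(
    "INSERT INTO audience VALUES (?, ?, ?)",
    [(1, "retail", 20.0), (2, "retail", 30.0), (3, "b2b", 100.0)],
)

# Per-segment rollup of the kind a targeting report would consume.
rollup = conn.execute(
    """SELECT segment, COUNT(*) AS users, SUM(spend) AS total_spend
       FROM audience
       GROUP BY segment
       ORDER BY segment"""
).fetchall()
```

The same GROUP BY shape ports directly to MS SQL or Snowflake; only the connection object changes.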

Posted 2 weeks ago

Apply

3.0 - 5.0 years

7 Lacs

Bengaluru

Work from Office

About Tarento: Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions. We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you'll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.

Primary/Mandatory Skills:
- Data Pipeline Development: Design, build, and maintain scalable and reliable data pipelines using Apache Spark, Kafka, and Apache Flink for real-time data processing and large-scale batch workflows.
- Real-time Data Streaming: Implement and manage real-time data streaming architectures leveraging Apache Kafka to process and transmit high volumes of streaming data in a fault-tolerant manner.
- Data Transformation and Orchestration: Develop data transformation workflows and integrate data from various sources while ensuring that pipelines are robust, efficient, and adhere to data engineering best practices.

Good to have:
- Data Quality Assurance: Implement data validation, quality checks, and monitoring systems to ensure data integrity and consistency across the entire pipeline.
- Collaboration with Cross-functional Teams: Work closely with Data Scientists, Analysts, and other stakeholders to understand data requirements and provide reliable data infrastructure solutions.
- Performance Optimization: Continuously monitor and optimize data processing performance, focusing on scaling solutions and improving efficiency.
- Documentation & Best Practices: Maintain clear documentation for data pipelines, data structures, and processes. Advocate for industry-standard data engineering practices across the team.
- Tool Expertise: Leverage tools like Looker for business intelligence and BigQuery for data warehousing to support analytics and decision-making.
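The core operation behind the streaming work described above is windowed aggregation. Here is a conceptual, pure-Python sketch of a tumbling-window count; the events and the 10-second window size are invented, and a real pipeline would express this in Spark Structured Streaming or Flink rather than in-memory Python.

```python
from collections import defaultdict

def tumbling_counts(events, window_seconds):
    """Assign (timestamp, key) events to fixed non-overlapping windows and
    count occurrences per key; this is what a streaming engine does
    incrementally and at scale."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in windows.items()}

# Invented event stream: (timestamp in seconds, event type).
events = [(1, "click"), (4, "view"), (9, "click"), (12, "click")]
result = tumbling_counts(events, 10)
```

Events at t=1, 4, 9 land in the [0, 10) window and the event at t=12 starts the [10, 20) window, which is the defining behavior of a tumbling (as opposed to sliding) window.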

Posted 2 weeks ago

Apply

1.0 - 3.0 years

3 - 6 Lacs

Bengaluru

Work from Office

Company Overview: CommerceIQ's AI-powered digital commerce platform is revolutionizing the way brands sell online. Our unified ecommerce management solutions empower brands to make smarter, faster decisions through insights that optimize the digital shelf, increase retail media ROI, and fuel incremental sales across the world's largest marketplaces. With a global network of more than 900 retailers, our end-to-end platform helps 2,200+ of the world's leading brands streamline marketing, supply chain, and sales operations to profitably grow market share in more than 50 countries. Learn more at commerceiq.ai.

We are seeking a highly skilled and experienced Generative AI Engineer to join our innovative team, with a paramount focus on developing and rigorously evaluating sophisticated multi-agent AI systems. This role is crucial for designing, building, deploying, and ensuring the accuracy and reliability of cutting-edge generative AI solutions that leverage collaborative AI agents. The ideal candidate will possess a deep understanding of generative models, combined with robust MLOps practices, strong back-end engineering skills in microservices architectures on cloud platforms like AWS or GCP, and mastery of Python, Langgraph, and Langchain. Proven experience with evaluation methodologies, including working with evaluation datasets and measuring the accuracy of multi-agent systems using tools like Langsmith or other open-source alternatives, is a must-have.

Key Responsibilities:

Generative AI Development & Multi-Agent Systems:
- Design, develop, and implement advanced generative AI models (LLMs) for various applications, from ideation to production.
- Build and deploy intelligent multi-agent AI systems, enabling collaborative behaviors and complex decision-making workflows.
- Utilize and extend frameworks like Langchain and Langgraph extensively for building sophisticated, multi-step AI applications, intelligent agents, and agentic workflows, with a strong focus on their evaluability.
- Fine-tune and adapt pre-trained generative models to specific business needs and datasets, often as components within agentic systems.
- Develop strategies for prompt engineering and RAG (Retrieval Augmented Generation) to enhance model performance and control, particularly in multi-agent contexts.
- Research and stay abreast of the latest advancements in generative AI, natural language processing, multi-agent systems, and autonomous AI.

Multi-Agent System Evaluation & Accuracy:
- Design, develop, and execute comprehensive evaluation strategies for multi-agent systems, defining key performance indicators (KPIs) and success metrics.
- Create, manage, and utilize high-quality evaluation datasets to rigorously test the accuracy, coherence, consistency, and robustness of multi-agent system outputs.
- Implement and leverage tools like Langsmith or other open-source solutions (e.g., TruLens, Ragas, custom frameworks) to trace agent interactions, analyze trajectories, and measure the accuracy and effectiveness of multi-agent system behavior.
- Perform root cause analysis for evaluation failures and drive iterative improvements to agent design and system performance.
- Develop methods for assessing inter-agent communication efficiency, task allocation accuracy, and collaborative problem-solving success.

MLOps & Deployment:
- Establish and implement robust MLOps pipelines for training, evaluating, deploying, monitoring, and managing generative AI models and multi-agent systems in production environments.
- Ensure model and agent system scalability, reliability, and performance in a production setting.
- Implement version control for models, data, and code.
- Monitor model drift, performance degradation, and data quality, implementing proactive solutions for both individual models and interconnected agents.

Back-end Engineering (Microservices on AWS/GCP):
- Develop and maintain highly scalable and resilient microservices to integrate generative AI models and orchestrate multi-agent systems into larger applications.
- Design and implement APIs for model inference and agent interaction and coordination.
- Deploy and manage microservices on cloud platforms such as AWS or GCP, utilizing services like EC2, S3, Lambda, EKS/ECS, SageMaker, GCP Compute Engine, GCS, GKE, and Vertex AI, with a focus on supporting agentic architectures.
- Implement best practices for security, logging, monitoring, and error handling in microservices, especially concerning inter-agent communication and system resilience.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related quantitative field.
- 1-3 years of experience in software engineering, with at least 1+ years focused on Machine Learning Engineering or Generative AI development.
- Demonstrable prior experience in multi-agent product development, including designing, implementing, and deploying systems with interacting AI agents.
- Mandatory experience in working with evaluation datasets, defining metrics, and assessing the accuracy and performance of multi-agent systems using tools like Langsmith or comparable open-source alternatives.
- Exceptional proficiency in Python and its machine learning ecosystem (e.g., PyTorch, TensorFlow, Hugging Face Transformers).
- Deep expertise with Langgraph and Langchain for building complex LLM applications, intelligent agents, and orchestrating multi-agent workflows.
- Solid understanding and practical experience with various generative AI models (LLMs).
- Proven experience with MLOps principles and tools (e.g., MLflow, Kubeflow, Data Version Control (DVC), CI/CD for ML), with an emphasis on agent system lifecycle management and continuous evaluation.
- Extensive experience designing, developing, and deploying microservices architectures on either AWS or GCP.
- Proficiency with containerization technologies (Docker) and orchestration (Kubernetes).
- Strong understanding of API design and development (RESTful, gRPC).
- Excellent problem-solving skills, with a focus on building robust, scalable, and maintainable solutions.
- Strong communication and collaboration skills.

Preferred Qualifications:
- Experience with Apache Spark for large-scale data processing.
- Experience with specific AWS services (e.g., SageMaker, Lambda, EKS) or GCP services (e.g., Vertex AI, GKE, Cloud Functions) for deploying and managing agentic systems.
- Familiarity with other distributed computing frameworks.
- Contributions to open-source projects in the AI/ML space, especially those related to multi-agent systems or agent frameworks (e.g., AutoGen, CrewAI).
- Experience with real-time inference for generative models and real-time agent decision-making and evaluation.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, disability status, or any other category prohibited by applicable law.
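The evaluation-dataset workflow this posting emphasizes reduces to a simple loop: run the agent over labeled cases, score each output, and keep the failures for root-cause analysis. Below is a bare-bones sketch with a trivial stand-in "agent" and an invented dataset; a real harness would delegate tracing and scoring to Langsmith or a comparable tool rather than exact string matching.

```python
def evaluate(agent, dataset):
    """Exact-match accuracy over (query, expected) pairs, plus the failing
    cases so they can be inspected for root-cause analysis."""
    failures = []
    for query, expected in dataset:
        answer = agent(query)
        if answer != expected:
            failures.append((query, expected, answer))
    accuracy = 1 - len(failures) / len(dataset)
    return accuracy, failures

def toy_agent(query):
    # Stand-in for a multi-agent system: just upper-cases its input.
    return query.upper()

# Invented evaluation dataset; the second case is deliberately unsatisfiable
# so the harness has a failure to report.
dataset = [("ok", "OK"), ("fail", "nope"), ("go", "GO"), ("run", "RUN")]
accuracy, failures = evaluate(toy_agent, dataset)
```

In practice the scorer is the hard part (semantic similarity, trajectory checks, LLM-as-judge), but the accuracy-plus-failure-list shape stays the same.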

Posted 2 weeks ago

Apply

10.0 - 15.0 years

25 - 30 Lacs

Hyderabad

Work from Office

Job Summary: We are seeking an accomplished Machine Learning Architect with 12 to 18 years of experience to lead the design and implementation of robust, scalable, and production-grade ML systems in the financial services domain. The ideal candidate will have deep expertise across the entire ML lifecycle, from data ingestion and modeling to deployment, serving, and operationalization, and a strong understanding of enterprise-level architecture and compliance considerations in the financial industry. As an ML Architect, you will work cross-functionally with engineering, data science, product, and compliance teams to design end-to-end machine learning platforms and pipelines that power intelligent financial applications such as fraud detection, credit scoring, customer analytics, and risk management.

Required Qualifications:
- 12 to 18 years of experience in software engineering, machine learning, or data platform architecture, with at least 5 years architecting end-to-end ML solutions.
- Proven experience in the financial domain, working with use cases such as fraud detection, risk scoring, AML, churn prediction, or credit modeling.
- Deep knowledge of computer science fundamentals, including data structures and algorithms, distributed systems, and high-availability, low-latency system design.
- Strong programming and architectural experience with Python, and at least one of Java, Scala, or C++.
- Expertise in ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) and MLOps platforms (e.g., MLflow, Kubeflow, SageMaker, Vertex AI).
- Experience building and deploying RESTful APIs or microservices to serve ML models.
- Solid hands-on experience with data processing tools (e.g., Apache Spark, Kafka, Airflow) and cloud infrastructure (AWS/GCP/Azure).
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes) for deploying scalable systems.

Preferred Qualifications:
- Prior experience in ML architecture within financial institutions, fintech, or regulated environments.
- Working knowledge of governance and compliance frameworks: model auditability, explainable AI (XAI), fairness and bias detection.
- Experience designing real-time inference and streaming-based ML systems.
- Understanding of feature stores, model registries, and data lineage in ML pipelines.
- Strong communication and leadership skills, capable of influencing C-level stakeholders and guiding engineering decisions across departments.

Technical Skills:
- 12+ years of experience in IT with relevant machine learning experience.
- Strong understanding of software engineering principles and fundamentals, including data structures and algorithms.
- Excellent understanding of object-oriented concepts and Python.
- Strong knowledge of computer science fundamentals to develop scalable systems.
- Experience with NLP models such as BERT and Transformer architectures.
- Experience leveraging computer vision and OCR in document extraction use cases.
- Familiarity with ML problem types (e.g., classification, regression, anomaly detection).
- Python ML packages (Scikit-learn, NumPy, Pandas, OpenCV).
- Exposure to REST API/Flask concepts.
- Experience with deep learning packages such as PyTorch and TensorFlow.
- Familiarity with graph databases like Neo4j is a plus.

Roles and Responsibilities:
- Architect and design scalable ML systems covering the entire lifecycle: data acquisition and preprocessing; model training, validation, and optimization; model deployment and API serving; and model monitoring, drift detection, retraining, and governance.
- Build and maintain modular, reusable, and compliant ML platforms and services, ensuring scalability, reliability, and performance for production environments.
- Define architecture and best practices for model versioning and reproducibility, feature engineering and feature stores, and MLOps workflows (CI/CD pipelines, model registry, deployment automation).
- Collaborate with data engineers, ML engineers, software architects, and domain experts to align technical design with business goals and regulatory requirements.
- Ensure systems meet financial industry standards for data privacy, explainability, auditability, and regulatory compliance (e.g., model risk management frameworks).
- Mentor engineering and data science teams on architectural patterns, system design principles, and ML operationalization.
- Evaluate and integrate new technologies and tools in ML infrastructure and cloud platforms.
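One common way to implement the model monitoring and drift detection duties named above is the Population Stability Index (PSI), which compares a model's production score distribution against its training-time baseline. The bucket proportions below are invented, and the widely cited PSI > 0.2 alarm threshold is a convention, not a standard.

```python
import math

def psi(expected, actual):
    """Population Stability Index over matched distribution buckets:
    sum over buckets of (actual - expected) * ln(actual / expected).
    Assumes both inputs are same-length lists of nonzero proportions."""
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

baseline = [0.25, 0.25, 0.25, 0.25]  # training-time score-bucket proportions
current = [0.10, 0.20, 0.30, 0.40]   # hypothetical production proportions
drift = psi(baseline, current)        # roughly 0.23 here, above the 0.2 rule of thumb
```

An identical distribution yields a PSI of zero; the shifted distribution above crosses the conventional 0.2 threshold, which is the point at which a monitoring pipeline would typically raise a retraining or investigation alert.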

Posted 2 weeks ago

Apply

1.0 - 9.0 years

9 - 13 Lacs

Hyderabad

Work from Office

Career Category: Information Systems

Job Description: Join Amgen's Mission of Serving Patients. At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission to serve patients living with serious illnesses drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Data Engineer - R&D Precision Medicine

What you will do: Let's do this. Let's change the world. In this vital role, you will be responsible for the end-to-end development of an enterprise analytics and data mastering solution using Databricks and Power BI. This role requires expertise in both data architecture and analytics, with the ability to create scalable, reliable, and impactful enterprise solutions that support research cohort-building and an advanced research pipeline. The ideal candidate will have experience creating and surfacing large unified repositories of human data, based on integrations from multiple repositories and solutions, and will be highly skilled in data analysis and profiling. You will collaborate closely with key customers, product team members, and related IT teams to design and implement data models, integrate data from various sources, and ensure best practices for data governance and security. The ideal candidate will have a solid background in data warehousing, ETL, Databricks, Power BI, and enterprise data mastering.

Responsibilities:
- Design and build scalable enterprise analytics solutions using Databricks, Power BI, and other modern data tools.
- Leverage data virtualization, ETL, and semantic layers to balance the need for unification, performance, and data transformation with the goal of reducing data proliferation.
- Break down features into work that aligns with the architectural direction runway.
- Participate hands-on in pilots and proofs-of-concept for new patterns.
- Create robust documentation from data analysis and profiling, along with proposed designs and data logic.
- Develop advanced SQL queries to profile and unify data.
- Develop data processing code in SQL, along with semantic views, to prepare data for reporting.
- Develop Power BI models and reporting packages.
- Design robust data models and processing layers that support both analytical processing and operational reporting needs.
- Design and develop solutions based on best practices for data governance, security, and compliance within Databricks and Power BI environments.
- Ensure the integration of data systems with other enterprise applications, creating seamless data flows across platforms.
- Develop and maintain Power BI solutions, ensuring data models and reports are optimized for performance and scalability.
- Collaborate with key customers to define data requirements, functional specifications, and project goals.
- Continuously evaluate and adopt new technologies and methodologies to enhance the architecture and performance of data solutions.

What we expect of you: We are all different, yet we all use our unique contributions to serve patients. The R&D Data Catalyst Team is responsible for building Data Searching, Cohort Building, and Knowledge Management tools that provide the Amgen scientific community with visibility into Amgen's wealth of human datasets, projects and study histories, and knowledge across various scientific findings. These solutions are pivotal tools in Amgen's goal to accelerate the speed of discovery and the speed to market of advanced precision medications.

Basic Qualifications:
- Master's degree and 1 to 3 years of data engineering experience, OR Bachelor's degree and 3 to 5 years of data engineering experience, OR diploma and 7 to 9 years of data engineering experience.
- Minimum of 3 years of hands-on experience with BI solutions (preferably Power BI or Business Objects), including report development, dashboard creation, and optimization.
- Minimum of 3 years of hands-on experience building change-data-capture (CDC) ETL pipelines, designing and building data warehouses, and enterprise-level data management.
- Hands-on experience with Databricks, including data engineering, optimization, and analytics workloads.
- Deep understanding of Power BI, including model design, DAX, and Power Query.
- Proven experience designing and implementing data mastering solutions and data governance frameworks.
- Expertise in cloud platforms (AWS), data lakes, and data warehouses.
- Strong knowledge of ETL processes, data pipelines, and integration technologies.
- Good communication and collaboration skills to work with cross-functional teams and senior leadership.
- Ability to assess business needs and design solutions that align with organizational goals.
- Exceptional hands-on capabilities with data profiling, data transformation, and data mastering.
- Success in mentoring and training team members.

Preferred Qualifications:
- Experience in developing differentiated and deliverable solutions.
- Experience with human data, ideally human healthcare data.
- Familiarity with laboratory testing, patient data from clinical care, HL7, FHIR, and/or clinical trial data management.

Professional Certifications:
- ITIL Foundation or other relevant certifications (preferred).
- SAFe Agile Practitioner (6.0).
- Microsoft Certified: Data Analyst Associate (Power BI) or related certification.
- Databricks Certified Professional or similar certification.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Deep intellectual curiosity.
- The highest degree of initiative and self-motivation.
- Strong verbal and written communication skills, including presenting complex technical/business topics to varied audiences.
- Confident technical leader.
- Ability to work effectively with global, remote teams, including using tools and artifacts to ensure clear and efficient collaboration across time zones.
- Ability to handle multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong problem-solving and analytical skills; ability to learn quickly and to retain and synthesize complex information from diverse sources.

What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team: careers.amgen.com.

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
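The change-data-capture (CDC) pipeline experience this posting asks for boils down to applying insert/update/delete change records to a keyed target. Here is a minimal sketch with invented record shapes; in Databricks this step would be a MERGE INTO against a Delta table rather than a Python dict.

```python
def apply_cdc(target, changes):
    """Apply CDC records of the form (op, key, row) to a dict-backed target
    table. Inserts and updates upsert the row; deletes remove the key."""
    for op, key, row in changes:
        if op in ("insert", "update"):
            target[key] = row
        elif op == "delete":
            target.pop(key, None)  # tolerate deletes for already-absent keys
    return target

# Invented target table and change batch.
target = {1: {"name": "Ada"}, 2: {"name": "Bo"}}
changes = [
    ("update", 1, {"name": "Ada L."}),
    ("delete", 2, None),
    ("insert", 3, {"name": "Cy"}),
]
apply_cdc(target, changes)
```

Ordering matters in real CDC streams (a delete followed by a re-insert of the same key must apply in sequence), which is why the loop applies changes strictly in arrival order.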

Posted 2 weeks ago

Apply

5.0 - 10.0 years

7 - 11 Lacs

Mumbai

Work from Office

Job Description:
- Lead the design and implementation of complex data solutions with a business-centric approach.
- Guide junior developers and provide technical mentorship.
- Ensure alignment of data architecture with marketing and business strategies.
- Work within Agile development teams, contributing to sprints and ceremonies.
- Design and implement CI/CD pipelines to support automated deployments and testing.
- Apply data engineering best practices to ensure scalable, maintainable codebases.
- Develop robust data pipelines and solutions using Python and SQL.
- Understand and manipulate business data to support marketing and audience targeting efforts.
- Collaborate with cross-functional teams to deliver data solutions that meet business needs.
- Communicate effectively with stakeholders to gather requirements and present solutions.
- Follow best practices for data processing and coding standards.

Skills:
- Proficient in Python for data manipulation and automation.
- Strong experience with SQL development (knowledge of MS SQL is a plus).
- Excellent written and oral communication skills.
- Deep understanding of business data, especially as it relates to marketing and audience targeting.
- Experience with Agile methodologies and CI/CD processes.
- Experience with MS SQL.
- Familiarity with SAS.
- Good to have: B2B and AWS knowledge.
- Nice to have: hands-on experience with orchestration and automation tools such as Snowflake Tasks and Streams.

Location: DGS India - Mumbai - Thane Ashar IT Park
Brand: Merkle
Time Type: Full time
Contract Type: Permanent

Posted 2 weeks ago

Apply

3.0 - 5.0 years

8 - 9 Lacs

Bengaluru

Work from Office

Job Description:
- Remotely monitor hydrogen electrolyzers throughout India and across different locations.
- Perform safe startup and safe shutdown of the hydrogen electrolyzer based on customer requirements, and assist with diagnostics.
- Proactively monitor operational data obtained through our server and applications.
- Respond quickly to the call center and ticketing systems in support of customer needs.
- Assess customer and regional monitoring protocols to support compliant data processing.
- Monitor the alarm server.
- Develop analytics to closely monitor the health and performance of the electrolyzer.
- Prepare the Daily Production Report (DPR) and distribute it for budgeting.
- Prepare operational assessment reports covering machine operational performance, trip analysis, structural health, and sensor health.
- Perform condition monitoring of the system and actively make decisions based on the current scenario.
- Safely isolate and restore equipment for issuance of a Permit to Work (PTW) to the Maintenance Department, with due attention to safety and the efficient operation of plant and personnel.
- Raise notifications in the ERP for any observation made on the electrolyzer, and make the necessary modifications for system improvement.

Education and Experience Required:
- 3-5 years of experience working with at least one monitoring software such as SCADA, PLC, HMI, or DCS is mandatory.
- Candidates with remote monitoring, condition monitoring, or control-room operations experience from industries such as chemical, oil & gas, process, power, battery, or fuel cells are preferred.

Ohmium is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
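The alarm-monitoring duty this role describes can be sketched as a simple limit check: compare live sensor readings against configured alarm limits and collect any violations. The sensor names and limits below are invented for illustration; a real plant delegates this to the SCADA/DCS alarm server, with the operator acting on the resulting alarm list.

```python
def check_alarms(readings, limits):
    """Return (sensor, value, low, high) for every reading outside its
    configured [low, high] limit band."""
    alarms = []
    for sensor, value in readings.items():
        low, high = limits[sensor]
        if not (low <= value <= high):
            alarms.append((sensor, value, low, high))
    return alarms

# Hypothetical tag names and limit bands.
limits = {"stack_temp_c": (20, 80), "h2_pressure_bar": (1, 30)}
readings = {"stack_temp_c": 85, "h2_pressure_bar": 12}
alarms = check_alarms(readings, limits)
```

Here the stack temperature reading exceeds its high limit while the pressure stays in band, so exactly one alarm is raised; production alarm systems add severity levels, deadbands, and acknowledgment on top of this basic check.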

Posted 2 weeks ago

Apply

4.0 - 5.0 years

5 - 8 Lacs

kolkata

Work from Office

Start Date: Immediate Joiners Preferred

Required Skills:
- Experience: 4-5 years of experience in Java development
- Core Skills: Java EE, Spring Boot, MVC architecture
- Additional: JPA2, Hibernate, ORM technologies, REST APIs

Responsibilities:
- Participate in the full software development lifecycle (design, development, testing, deployment)
- Develop robust, scalable applications using Java EE, Spring Boot, and associated technologies
- Build RESTful web services and integrate backend systems
- Write clean, testable, and maintainable code
- Perform software analysis, debugging, and troubleshooting
- Collaborate with other developers, architects, and client-side stakeholders
- Contribute to application releases and deployment cycles

Nice to Have:
- Exposure to web frameworks (JSF, Struts, Servlets), cloud platforms like AWS, Hadoop or data processing pipelines, and the ability to support microservice-based architecture.

Apply Now

Posted 2 weeks ago

Apply

4.0 - 5.0 years

4 - 8 Lacs

kolkata

Work from Office

Start Date: Immediate Joiners Preferred

Required Skills:
- Experience: 4-5 years of solid Python development
- Core Skills: Databricks, PySpark, Data Engineering
- Additional: SQL/NoSQL, Cloud (Azure/AWS), Git

Responsibilities:
- Develop and maintain scalable Python-based applications and data pipelines
- Use Databricks for data engineering, big data processing, and transformations
- Build and optimize ETL pipelines using PySpark
- Optionally integrate workflows using UiPath or Azure Data Factory (ADF)
- Collaborate with client-side business and technical teams to understand and implement data-driven solutions
- Optimize, troubleshoot, and scale existing code for performance improvements
- Contribute to system architecture and participate in regular stand-ups or reviews

Nice to Have:
- Experience with workflow automation (UiPath, ADF), exposure to AI/ML project environments, basic knowledge of FastAPI or microservices.

Apply Now
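The day-to-day of a role like this centres on transform steps in ETL pipelines. As a minimal plain-Python sketch (the column names and cleaning rules below are invented for illustration, not taken from any actual client pipeline), a transform might drop incomplete rows, normalise types, and derive a partition field:

```python
# Illustrative ETL transform step; the schema and rules are hypothetical.
from datetime import datetime

def transform(records):
    """Clean raw rows: drop incomplete ones, normalise types, derive a field."""
    cleaned = []
    for row in records:
        # Drop rows missing mandatory fields (a common data-quality gate).
        if not row.get("id") or row.get("amount") is None:
            continue
        cleaned.append({
            "id": str(row["id"]).strip(),
            "amount": round(float(row["amount"]), 2),
            # Derive a partition key from the event timestamp, as data lakes often do.
            "event_date": datetime.fromisoformat(row["ts"]).date().isoformat(),
        })
    return cleaned

raw = [
    {"id": " 101 ", "amount": "19.999", "ts": "2024-05-01T10:00:00"},
    {"id": None, "amount": 5.0, "ts": "2024-05-01T11:00:00"},  # dropped: no id
]
print(transform(raw))
```

In PySpark the same logic would typically be expressed as DataFrame `filter`/`withColumn` operations so it scales across a cluster.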

Posted 2 weeks ago

Apply

1.0 - 5.0 years

3 - 7 Lacs

kolkata, mumbai, new delhi

Work from Office

Responsibilities:
- Process complex financial data: Handle large-scale financial datasets from multiple sources, including banks, payment gateways, and processors, ensuring data integrity across different formats and structures.
- Build and maintain complex pipelines: Develop and optimise pipelines that apply intricate business rules, financial calculations, and transformations for accurate transaction processing.
- Time-bound and event-driven processing: Design event-driven architectures to meet strict SLAs for transaction processing, settlement, and reconciliation with banks and payment partners.
- Enable reporting and AI insights: Structure and prepare data for advanced analytics, reporting, and AI-driven insights to improve payment success rates, detect fraud, and optimise transaction flows.

What you'll bring:
- Programming: Strong proficiency in Python and OOP concepts; able to write modular, structured code and an excellent problem solver.
- Big data technologies: Hands-on experience with frameworks like Spark (PySpark), Kafka, Apache Hudi, Iceberg, Apache Flink, or similar tools for distributed data processing and real-time streaming.
- Cloud platforms: Familiarity with cloud platforms like AWS, Google Cloud Platform (GCP), or Microsoft Azure for building and managing data infrastructure.
- Data warehousing and modeling: Strong understanding of data warehousing concepts and data modeling principles.
- ETL frameworks: Experience with ETL tools such as Apache Airflow or comparable data transformation frameworks.
- Data lakes and storage: Proficiency in working with data lakes and cloud-based storage solutions like Amazon S3, Google Cloud Storage, or Azure Blob Storage.
- Version control: Expertise in Git for version control and collaborative coding.
- Handling complex systems: Expertise in working with complex systems and building complex systems from scratch, with good exposure to solving complex problems.

Experience and requirements:
- Bachelor's degree in Computer Science, Information Technology, or equivalent experience.
- 1-5 years of experience in data engineering, ETL development, or database management.
- Prior experience in cloud-based environments (e.g., AWS, GCP, Azure) is highly desirable.
- Proven experience working with complex systems and building complex systems from scratch, with a focus on performance tuning and optimisation.
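The settlement-and-reconciliation responsibility this role describes can be illustrated with a small sketch. The record shapes and transaction IDs below are invented sample data, not any real bank or gateway format; the point is the matching logic, which in production would run over millions of rows in Spark or SQL:

```python
# Illustrative settlement reconciliation; record shapes are hypothetical.
def reconcile(gateway_txns, bank_txns):
    """Match gateway transactions against a bank statement by transaction id."""
    bank_by_id = {t["txn_id"]: t for t in bank_txns}
    matched, mismatched, missing = [], [], []
    for g in gateway_txns:
        b = bank_by_id.get(g["txn_id"])
        if b is None:
            missing.append(g["txn_id"])        # never settled in the bank file
        elif b["amount"] != g["amount"]:
            mismatched.append(g["txn_id"])     # amount disagreement to investigate
        else:
            matched.append(g["txn_id"])
    return {"matched": matched, "mismatched": mismatched, "missing": missing}

gateway = [{"txn_id": "T1", "amount": 100}, {"txn_id": "T2", "amount": 250},
           {"txn_id": "T3", "amount": 75}]
bank = [{"txn_id": "T1", "amount": 100}, {"txn_id": "T2", "amount": 240}]
print(reconcile(gateway, bank))
```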

Posted 2 weeks ago

Apply

2.0 - 4.0 years

6 - 10 Lacs

chennai

Work from Office

Quapt Technologies is looking for an SQL Developer (Data Engineering) to join our dynamic team and embark on a rewarding career journey:
- Designing and implementing database structures, including tables, indexes, views, and stored procedures
- Writing and testing SQL scripts, including complex queries and transactions, to support data analysis and application development
- Maintaining and optimizing existing database systems, troubleshooting performance issues and resolving data integrity problems
- Collaborating with software developers, project managers, and other stakeholders to ensure that database designs meet business requirements and technical specifications
- Implementing database security and access control measures, ensuring the confidentiality and protection of sensitive data
- Monitoring database performance and scalability, and making recommendations for improvements
- Excellent communication and interpersonal skills

Posted 2 weeks ago

Apply

4.0 - 5.0 years

5 - 8 Lacs

kolkata

Work from Office

Required Skills:
- Core: Java EE, Spring Boot, MVC architecture
- ORM: JPA2, Hibernate, Object-Oriented Design Patterns
- APIs: REST APIs, unit testing frameworks, Git

Responsibilities:
- Participate in the full software development lifecycle (design, development, testing, deployment)
- Develop robust, scalable applications using Java EE, Spring Boot, and associated technologies
- Build RESTful web services and integrate backend systems
- Write clean, testable, and maintainable code
- Perform software analysis, debugging, and troubleshooting
- Collaborate with other developers, architects, and client-side stakeholders

Nice to Have:
- Exposure to web frameworks (JSF, Struts, Servlets), cloud platforms like AWS, Hadoop or data processing pipelines, and microservice-based architecture support.

Posted 2 weeks ago

Apply

1.0 - 5.0 years

9 - 13 Lacs

ahmedabad

Work from Office

The Global Procurement Executive oversees global sourcing, vendor development, and cost optimization strategies. Responsible for building a competitive supplier base, driving savings, and ensuring reliable, high-quality supply across international markets.

Responsibilities:
- Procure goods, materials, components, and services within specified cost and delivery timelines.
- Ensure continuous supply of required materials across all commodities (electronics, components, local/global, R&D).
- Manage end-to-end procurement processes for all categories.
- Resolve price, delivery, or invoice issues with suppliers and submit documents to finance.
- Process purchase requisitions and release POs within 2-3 working days for regular requirements, ensuring timely delivery.
- Gather technical and commercial information from suppliers before purchasing materials.
- Source and develop new/alternative vendors locally and internationally to improve lead times and pricing.
- Approve samples from R&D for new vendors.
- Collect market data on electronic components from suppliers.
- Support R&D with material arrangements for new projects within deadlines.
- Assist with product costing and share with the HOD for approval.
- Identify new material sources to reduce costs and obtain samples for approval.
- Prepare necessary reports and maintain documentation.

Qualifications:
- B.E/B.Tech/Diploma/B.Sc in Electronics & Communication, Electrical, Instrumentation, or Power Electronics.
- Strong networking and presentation skills, the ability to develop positive relationships with vendors, effective communication, and analytical skills.

Competencies:
- Strategic thinking and problem-solving skills.
- Ability to work independently and as part of a team.
- Strong negotiation and closing skills.

Posted 2 weeks ago

Apply

6.0 - 11.0 years

12 - 16 Lacs

bengaluru

Work from Office

Infrastructure & Architecture:
- Design and scale distributed systems for real-time speech processing.
- Own data pipelines from ingestion to model serving.
- Ensure security, privacy, and compliance for user speech data.

Performance & Reliability:
- Implement robust monitoring, logging, and alerting for 24x7 availability.
- Optimise system performance and scalability.

Team Leadership & Collaboration:
- Mentor junior engineers and work closely with AI/ML teams.
- Contribute to hiring efforts as we expand.

Must-Have Requirements:
- Experience: 6+ years in backend or platform engineering (focus on distributed systems).
- Cloud & Containers: Proficiency with Docker, Kubernetes, and AWS (or similar cloud platforms).
- Real-Time Streaming: Familiarity with audio/video streaming in high-availability, auto-scaling environments.
- Databases: Strong SQL skills (PostgreSQL preferred), including query optimization and schema design; caching layers (Redis, Memcached).
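The caching-layer requirement above usually means the cache-aside (read-through) pattern in front of the database. As an in-process sketch of that pattern (the class and the fake loader are invented for illustration; in production the store would be Redis or Memcached, not a Python dict):

```python
# Illustrative read-through cache with TTL, mimicking the pattern typically
# used in front of PostgreSQL with Redis/Memcached; all names are invented.
import time

class TTLCache:
    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable clock keeps the cache testable
        self._store = {}            # key -> (value, expiry_timestamp)

    def get_or_load(self, key, loader):
        """Return the cached value, or call loader (e.g. a DB query) on a miss."""
        entry = self._store.get(key)
        now = self.clock()
        if entry is not None and entry[1] > now:
            return entry[0]
        value = loader(key)
        self._store[key] = (value, now + self.ttl)
        return value

calls = []
def fake_db_query(key):
    calls.append(key)               # track how often the "database" is hit
    return key.upper()

cache = TTLCache(ttl_seconds=60)
print(cache.get_or_load("user:1", fake_db_query))  # miss -> hits the loader
print(cache.get_or_load("user:1", fake_db_query))  # hit  -> served from cache
```

The second call never touches the loader, which is the whole point: the database sees one query per key per TTL window.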

Posted 2 weeks ago

Apply

2.0 - 5.0 years

50 - 55 Lacs

bengaluru

Work from Office

Infrastructure & Architecture:
- Design and scale distributed systems for real-time speech processing.
- Own data pipelines from ingestion to model serving.
- Ensure security, privacy, and compliance for user speech data.

Performance & Reliability:
- Implement robust monitoring, logging, and alerting for 24x7 availability.
- Optimize system performance and scalability.

Must-Have Requirements:
- Experience: 2-5+ years in backend or platform engineering (focus on distributed systems).
- Cloud & Containers: Proficiency with Kubernetes and AWS (or similar cloud platforms).
- Real-Time Streaming: Familiarity with audio/video streaming in high-availability, auto-scaling environments.
- Databases: Strong SQL skills (PostgreSQL preferred), including query optimization and schema design; caching layers (Redis, Memcached).

Posted 2 weeks ago

Apply

7.0 - 12.0 years

25 - 30 Lacs

chennai

Work from Office

Responsibilities:
- Design and manage scalable data pipelines and ETL/ELT processes using Azure Data Factory, Synapse Analytics, and emerging platforms such as Microsoft Fabric and Databricks.
- Collaborate with peers to create and maintain data models and databases in Azure SQL DB and Azure Data Lake.
- Ensure data quality, lineage, and availability through rigorous validation, testing, and monitoring practices.
- Integrate data from ETRM systems, business applications, refinery control systems, and retail station point-of-sale networks.
- Implement real-time analytics solutions using Azure Event Hubs, Stream Analytics, and IoT Hub.
- Liaise with multiple functional groups across Glencore Group, the Oil Department, and the industrial assets to provision and deploy infrastructure within Azure.
- Implement data governance and security policies using Microsoft Purview and Azure RBAC.
- Ensure data quality, lineage, and availability for business-critical applications.
- Deliver high-velocity solutions supported by strong coding practices and automation.
- Build and maintain multiple pipelines in parallel, managing context switching effectively.
- Stay current with modern data technologies and trends, including AI/ML, and advise on their responsible adoption.
- Operate independently, driving individual workstreams while contributing to team-wide initiatives.

COMPETENCIES

Core Competencies:
- Bachelor's degree in Computer Science, Information Technology, or equivalent experience.
- Data Engineering and Modeling: Strong grasp of core data modeling concepts and techniques; experience designing and managing ETL/ELT pipelines using Azure Data Factory and Synapse Analytics.
- Programming Languages: Expert proficiency in SQL, Python, and PySpark for data transformation, validation, and analytics.
- Cloud Technologies (Azure): Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, Azure Data Lake Storage Gen2, Azure SQL, Azure Functions.
- Version Control and CI/CD: Experience with Git/GitHub and CI/CD pipelines using GitHub Actions.

Posted 2 weeks ago

Apply

0.0 - 2.0 years

1 - 4 Lacs

gurugram

Work from Office

- Develop and maintain client-ready visualization dashboards using tools like Tableau, with some support from senior team members
- Support data preparation and analysis using tools such as Alteryx or SQL
- Assist in the execution of benchmarking surveys, including developing sample specifications and managing research data
- Provide ongoing support for subscription customers, including generating custom data cuts, answering methodology questions, and preparing insights for delivery
- Collaborate with team members to ensure high-quality, timely outputs across product operations workstreams
- Translate technical concepts and data into clear and structured insights for internal and external stakeholders, with support from senior team members
- Contribute to developing reusable templates, process documentation, and standard work practices that improve efficiency and scalability
- Demonstrate curiosity and hypothesis-driven thinking to uncover patterns, and build robust analytical models that can help solve the problem at hand

Posted 2 weeks ago

Apply

12.0 - 15.0 years

8 - 11 Lacs

gurugram

Work from Office

Passion for visualizing hidden relationships in technical data with ProSight. Must demonstrate leadership to respond to customers seeking an urgent resolution to sometimes ambiguous technical needs, quickly gain the customer's trust, and show accountability for problem resolution. Excellent verbal, written, and interpersonal communication skills and effective presentation skills in English (oral and written).

Posted 2 weeks ago

Apply

3.0 - 7.0 years

20 - 25 Lacs

bengaluru

Work from Office

Responsibilities:
- Develop and maintain robust ETL/ELT pipelines using Python, SQL, and PySpark.
- Work with on-premise big data platforms such as Spark, Hadoop, Hive, and HDFS.
- Optimize and troubleshoot workflows to ensure performance, reliability, and quality.
- Use AI tools to assist with code generation, testing, debugging, and documentation.
- Collaborate with data scientists, analysts, and engineers to support data-driven use cases.
- Maintain up-to-date documentation using AI summarization tools.
- Apply AI-augmented software engineering practices, including automated testing, code reviews, and CI/CD.
- Identify opportunities for automation and process improvement across the data lifecycle.

Qualifications:
- 1-3 years of hands-on experience as a Data Engineer or in a similar data-focused engineering role.
- Proficiency in Python for data manipulation, automation, and scripting.
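The automated-testing practice this role mentions often takes the form of data-quality checks run in CI before a batch is loaded. A minimal sketch, with an invented schema and invented rules (real pipelines would drive this from a schema registry or a tool like Great Expectations):

```python
# Illustrative data-quality checks of the kind run in CI; schema is hypothetical.
def validate_rows(rows, required=("id", "ts"), non_negative=("amount",)):
    """Return (row_index, problem) pairs; an empty list means the batch passes."""
    problems = []
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) in (None, ""):
                problems.append((i, f"missing {col}"))
        for col in non_negative:
            value = row.get(col)
            if value is not None and value < 0:
                problems.append((i, f"negative {col}"))
    return problems

good = [{"id": 1, "ts": "2024-01-01", "amount": 10.0}]
bad = [{"id": None, "ts": "2024-01-01", "amount": -5.0}]
print(validate_rows(good))  # passes: []
print(validate_rows(bad))
```

A CI step would simply assert that `validate_rows` returns an empty list for the batch under test and fail the build otherwise.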

Posted 2 weeks ago

Apply

5.0 - 10.0 years

30 - 35 Lacs

bengaluru

Work from Office

Responsibilities:
- Design, build, and maintain on-prem data pipelines to ingest, process, and transform large volumes of data from multiple sources into data warehouses and data lakes
- Develop and optimize Scala-Spark and SQL jobs for high-performance batch and real-time data processing
- Ensure the scalability, reliability, and performance of data infrastructure in an on-prem setup
- Collaborate with data scientists, analysts, and business teams to translate their data requirements into technical solutions
- Troubleshoot and resolve issues in data pipelines and data processing workflows
- Monitor, tune, and improve Hadoop clusters and data jobs for cost and resource efficiency
- Stay current with on-prem big data technology trends and suggest enhancements to improve data engineering capabilities

Qualifications:
- Bachelor's degree in software engineering or a related field
- 5+ years of experience in data engineering or a related domain
- Strong p
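Much of the batch-processing work described above boils down to grouped aggregation. As a plain-Python illustration of what a Spark-style reduce-by-key job computes (the event records are invented sample data; in Spark this would be a `reduceByKey` or a DataFrame `groupBy().sum()` running across a cluster):

```python
# Plain-Python illustration of reduce-by-key aggregation, the core of many
# Spark batch jobs; the event records here are invented sample data.
from collections import defaultdict

def sum_by_key(events):
    """Aggregate event values per key, as reduceByKey(_ + _) would in Spark."""
    totals = defaultdict(float)
    for key, value in events:
        totals[key] += value
    return dict(totals)

events = [("clicks", 3), ("views", 10), ("clicks", 2), ("views", 5)]
print(sum_by_key(events))  # {'clicks': 5.0, 'views': 15.0}
```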

Posted 2 weeks ago

Apply

0.0 - 3.0 years

1 - 3 Lacs

agra

Work from Office

- Manage and update office records, files, and databases
- Assist with document processing and preparation of reports
- Handle communications and coordinate between departments
- Support administrative tasks such as scheduling meetings, organizing files, and data entry
- Ensure the accuracy and organization of all paperwork and office systems
- Assist in handling customer queries and support tasks as needed
- Help manage office supplies and equipment

Posted 2 weeks ago

Apply

1.0 - 5.0 years

1 - 3 Lacs

surat

Work from Office

Location: Surat (candidates must be based in, or ready to relocate to, Surat)

Job Summary: Responsible for accurate entry, updating, and maintenance of Item Bill of Material (BOM) data in the ERP/system, ensuring completeness and correctness to support smooth production and inventory management processes.

Key Responsibilities:
- Enter, update, and verify BOM data in the ERP/system as per approved specifications.
- Ensure accuracy in linking item codes, descriptions, quantities, and components.
- Coordinate with Design, Production, and Stores teams for BOM-related clarifications.
- Maintain organized digital and physical records of BOM data.
- Assist in preparing reports and data extracts for analysis.

Qualifications & Skills:
- Minimum 12th pass or Graduate in any discipline.
- Basic knowledge of BOM / manufacturing processes preferred.
- Proficient in MS Excel, ERP, and data entry tools.
- High accuracy, attention to detail, and typing speed.
- Good communication and coordination skills.

Experience: 0-2 years in data entry or documentation (manufacturing background preferred).

Job Type: Full-time

Benefits:
- Food provided
- Health insurance
- Leave encashment
- Paid sick time
- Paid time off
- Provident Fund

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
