2.0 - 6.0 years
0 Lacs
noida, uttar pradesh
On-site
You will be working as an IoT Solution Architect at IENERGY, a leading provider of EHS and ESG software solutions. Your primary responsibilities will include designing and implementing IoT solutions, providing consulting services, and ensuring seamless integration with existing systems. You will collaborate with cross-functional teams to develop software solutions, manage business processes, and optimize IoT architecture for improved performance and efficiency. To excel in this role, you should be proficient in solution architecture and integration, with experience in consulting, business processes, and software development. You should also have 2-3 years of experience setting up MQTT-enabled IoT devices such as GPS trackers and sensors, along with expertise in Kafka in a live data-handling environment. Experience in setting up data pipelines is a plus.
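To make the MQTT-plus-Kafka requirement concrete, here is a minimal sketch of a bridge that subscribes to device telemetry over MQTT and forwards each reading to a Kafka topic. It assumes the paho-mqtt (1.x-style callback API) and kafka-python libraries; the broker addresses and topic names are hypothetical placeholders, not part of the posting.

```python
import json

import paho.mqtt.client as mqtt
from kafka import KafkaProducer

# Hypothetical broker addresses and topic names, for illustration only.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def on_message(client, userdata, msg):
    # Forward each device reading (GPS fix, sensor value, ...) to Kafka.
    reading = json.loads(msg.payload)
    producer.send("iot.telemetry", value=reading)

# paho-mqtt 2.x additionally requires mqtt.Client(mqtt.CallbackAPIVersion.VERSION1).
client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("devices/+/telemetry")  # '+' matches any device id
client.loop_forever()
```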
Posted 1 week ago
10.0 - 16.0 years
25 - 40 Lacs
Gurugram, Bengaluru, Delhi / NCR
Work from Office
Acuity Knowledge Partners (Acuity) is a leading provider of bespoke research, analytics and technology solutions to the financial services sector, including asset managers, corporate and investment banks, private equity and venture capital firms, hedge funds and consulting firms. Its global network of over 6,000 analysts and industry experts, combined with proprietary technology, supports more than 600 financial institutions and consulting companies to operate more efficiently and unlock their human capital, driving revenue higher and transforming operations. Acuity is headquartered in London and operates from 10 locations worldwide. The company fosters a diverse, equitable and inclusive work environment, nurturing talent regardless of race, gender, ethnicity or sexual orientation. Acuity was established as a separate business from Moody's Corporation in 2019, following its acquisition by Equistone Partners Europe (Equistone). In January 2023, funds advised by global private equity firm Permira acquired a majority stake in the business from Equistone, which remains invested as a minority shareholder. For more information, visit www.acuitykp.com

Position Title: Associate Director (Senior Architect – Data)
Department: IT
Location: Gurgaon/Bangalore

Job Summary
The Enterprise Data Architect will enhance the company's strategic use of data by designing, developing, and implementing data models for enterprise applications and systems at the conceptual, logical, business-area, and application layers. This role advocates data modeling methodologies and best practices. We seek a skilled Data Architect with deep knowledge of data architecture principles, extensive data modeling experience, and the ability to create scalable data solutions. Responsibilities include developing and maintaining enterprise data architecture and ensuring data integrity, interoperability, security, and availability, with a focus on ongoing digital transformation projects.

Key Responsibilities
1. Strategy & Planning
- Develop and deliver long-term strategic goals for data architecture vision and standards in conjunction with data users, department managers, clients, and other key stakeholders.
- Create short-term tactical solutions to achieve long-term objectives and an overall data management roadmap.
- Establish processes for governing the identification, collection, and use of corporate metadata; take steps to assure metadata accuracy and validity.
- Establish methods and procedures for tracking data quality, completeness, redundancy, and improvement.
- Conduct data capacity planning, life cycle, duration, usage requirements, feasibility studies, and other tasks.
- Create strategies and plans for data security, backup, disaster recovery, business continuity, and archiving.
- Ensure that data strategies and architectures are aligned with regulatory compliance.
- Develop a comprehensive data strategy in collaboration with different stakeholders that aligns with the transformational projects' goals.
- Ensure effective data management throughout the project lifecycle.
2. Acquisition & Deployment
- Ensure the success of enterprise-level application rollouts (e.g. ERP, CRM, HCM, FP&A, etc.).
- Liaise with vendors and service providers to select the products or services that best meet company goals.
3. Operational Management
- Assess and determine governance, stewardship, and frameworks for managing data across the organization.
- Develop and promote data management methodologies and standards.
- Document information products from business processes and create data entities.
- Create entity relationship diagrams to show the digital thread across the value streams and enterprise.
- Drive data normalization across all systems and databases to ensure a common definition of data entities across the enterprise.
- Document enterprise reporting needs and develop the data strategy to enable a single source of truth for all reporting data.
- Address the regulatory compliance requirements of each country and ensure our data is secure and compliant.
- Select and implement the appropriate tools, software, applications, and systems to support data technology goals.
- Oversee the mapping of data sources, data movement, interfaces, and analytics, with the goal of ensuring data quality.
- Collaborate with project managers and business unit leaders for all projects involving enterprise data.
- Address data-related problems regarding systems integration, compatibility, and multiple-platform integration.
- Act as a leader and advocate of data management, including coaching, training, and career development for staff.
- Develop and implement key components as needed to create testing criteria that guarantee the fidelity and performance of the data architecture.
- Document the data architecture and environment to maintain a current and accurate view of the larger data picture.
- Identify and develop opportunities for data reuse, migration, or retirement.
4. Data Architecture Design
- Develop and maintain the enterprise data architecture, including data models, databases, data warehouses, and data lakes.
- Design and implement scalable, high-performance data solutions that meet business requirements.
5. Data Governance
- Establish and enforce data governance policies and procedures as agreed with stakeholders.
- Maintain data integrity, quality, and security within Finance, HR, and other enterprise systems.
6. Data Migration
- Oversee the data migration process from legacy systems to the new systems being put in place.
- Define and manage data mappings, cleansing, transformation, and validation to ensure accuracy and completeness.
7. Master Data Management
- Devise processes to manage master data (e.g., customer, vendor, product information) to ensure consistency and accuracy across enterprise systems and business processes.
- Provide data management (create, update, and delimit) methods to ensure master data is governed.
8. Stakeholder Collaboration
- Collaborate with various stakeholders, including business users and other system vendors, to understand data requirements.
- Ensure the enterprise system meets the organization's data needs.
9. Training and Support
- Provide training and support to end-users on data entry, retrieval, and reporting within the enterprise systems.
- Promote user adoption and proper use of data.
10. Data Quality Assurance
- Implement data quality assurance measures to identify and correct data issues.
- Ensure Oracle Fusion and other enterprise systems contain reliable and up-to-date information.
11. Reporting and Analytics
- Facilitate the development of reporting and analytics capabilities within Oracle Fusion and other systems.
- Enable data-driven decision-making through robust data analysis.
12. Continuous Improvement
- Continuously monitor and improve data processes and the data capabilities of Oracle Fusion and other systems.
- Leverage new technologies for enhanced data management to support evolving business needs.
Technology and Tools
- Oracle Fusion Cloud
- Data modeling tools (e.g., ER/Studio, ERwin)
- ETL tools (e.g., Informatica, Talend, Azure Data Factory)
- Data pipeline tools (e.g., Apache Airflow, AWS Glue)
- Database management systems (e.g., Oracle Database, MySQL, SQL Server, PostgreSQL, MongoDB, Cassandra, Couchbase, Redis, Hadoop, Apache Spark, Amazon RDS, Google BigQuery, Microsoft Azure SQL Database, Neo4j, OrientDB, Memcached)
- Data governance tools (e.g., Collibra, Informatica Axon, Oracle EDM, Oracle MDM)
- Reporting and analytics tools (e.g., Oracle Analytics Cloud, Power BI, Tableau, Oracle BIP)
- Hyperscalers/cloud platforms (e.g., AWS, Azure), including AWS services such as RDS, Redshift, and S3, Microsoft Azure services like Azure SQL Database and Cosmos DB, and Google Cloud Platform services such as BigQuery and Cloud Storage
- Big data technologies such as Hadoop, HDFS, MapReduce, and Spark
- Programming languages and platforms (e.g., Java, J2EE, EJB, .NET, WebSphere): strong SQL skills for querying and managing databases; proficiency in Python for data manipulation and analysis; knowledge of Java for building data-driven applications
- Data security and protocols: understanding of data security protocols and compliance standards

Qualifications
Education:
- Bachelor's degree in Computer Science, Information Technology, or a related field. Master's degree preferred.
Experience:
- 10+ years overall, with at least 7 years of experience in data architecture, data modeling, and database design.
- Proven experience with data warehousing, data lakes, and big data technologies.
- Expertise in SQL and experience with NoSQL databases.
- Experience with cloud platforms (e.g., AWS, Azure) and related data services.
- Experience with Oracle Fusion or similar ERP systems is highly desirable.
Skills:
- Strong understanding of data governance and data security best practices.
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills.
- Ability to work effectively in a collaborative team environment.
- Leadership experience with a track record of mentoring and developing team members.
- Excellent documentation and presentation skills.
- Good knowledge of applicable data privacy practices and laws.
Certifications:
- Relevant certifications (e.g., Certified Data Management Professional, AWS Certified Big Data – Specialty) are a plus.
Behavioral:
- A self-starter, an excellent planner and executor and, above all, a good team player.
- Excellent communication and interpersonal skills are a must.
- Strong organizational skills, including multitasking, priority setting, and meeting deadlines.
- Ability to build collaborative relationships and effectively leverage networks to mobilize resources.
- Initiative to learn the business domain is highly desirable.
- Enjoys a dynamic, constantly evolving environment and requirements.
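As a hedged illustration of the pipeline orchestration tools listed above, here is a minimal Apache Airflow DAG wiring a daily extract-transform-load sequence, assuming an Airflow 2.x environment; the DAG id, schedule, and task bodies are hypothetical placeholders, not part of the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source data")        # placeholder task body

def transform():
    print("apply business rules")    # placeholder task body

def load():
    print("write to the warehouse")  # placeholder task body

with DAG(
    dag_id="enterprise_reporting_etl",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load  # run the three steps in order
```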
Posted 1 week ago
5.0 - 8.0 years
27 - 42 Lacs
Bengaluru
Work from Office
Job Summary
As a Software Engineer in NetApp India's R&D division, you will be responsible for the design, development, and validation of software for big data engineering across both cloud and on-premises environments. You will be part of a highly skilled technical team named NetApp Active IQ. The Active IQ DataHub platform processes over 10 trillion data points per month that feed a multi-petabyte data lake. The platform is built using Kafka, a serverless platform running on Kubernetes, Spark, and various NoSQL databases. It enables the use of advanced AI and ML techniques to uncover opportunities to proactively protect and optimize NetApp storage, and then provides the insights and actions to make it happen. We call this "actionable intelligence".

Job Requirements
- Design and build our Big Data Platform, with a solid understanding of scale, performance, and fault tolerance.
- Interact with Active IQ engineering teams across geographies to leverage expertise and contribute to the tech community.
- Identify the right tools to deliver product features by performing research, POCs, and interacting with various open-source forums.
- Work on technologies related to NoSQL, SQL, and in-memory databases.
- Conduct code reviews to ensure code quality, consistency, and adherence to best practices.

Technical Skills
- Hands-on big data development experience is required.
- Demonstrated up-to-date expertise in data engineering and complex data pipeline development.
- Design, develop, implement, and tune distributed data processing pipelines that process large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built.
- Awareness of data governance (data quality, metadata management, security, etc.).
- Experience with one or more of Python/Java/Scala.
- Knowledge of and experience with Kafka, Storm, Druid, Cassandra, or Presto is an added advantage.

Education
- A minimum of 5 years of experience is required; 5-8 years is preferred.
- A Bachelor of Science degree in Electrical Engineering or Computer Science, a Master's degree, or equivalent experience is required.
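Since the role centers on Kafka and Spark, here is a minimal PySpark Structured Streaming sketch that consumes JSON telemetry from a Kafka topic and lands it as Parquet. The broker address, topic, schema fields, and paths are hypothetical illustrations, not Active IQ internals, and the job assumes the Spark-Kafka connector package is on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructType

spark = SparkSession.builder.appName("telemetry-stream").getOrCreate()

# Hypothetical event schema for illustration.
schema = (StructType()
          .add("system_id", StringType())
          .add("capacity_used_pct", DoubleType()))

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "telemetry")
       .load())

events = (raw
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Append events to the data lake; the checkpoint makes the job fault-tolerant.
(events.writeStream
 .format("parquet")
 .option("path", "/datalake/telemetry")
 .option("checkpointLocation", "/checkpoints/telemetry")
 .start()
 .awaitTermination())
```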
Posted 1 week ago
5.0 - 10.0 years
10 - 20 Lacs
Pune
Hybrid
We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have a strong background in designing, building, and optimizing data pipelines and architectures to support our growing data-driven initiatives. Knowledge of machine learning techniques and frameworks is a significant advantage and will allow you to collaborate closely with our data science team.

Key Responsibilities:
- Design, implement, and maintain scalable data pipelines for collecting, processing, and analyzing large datasets (a minimal ETL sketch follows below).
- Build and optimize data architectures to support business intelligence, analytics, and machine learning models.
- Collaborate with data scientists, analysts, and software engineers to ensure seamless data integration and accessibility.
- Develop and maintain ETL (Extract, Transform, Load) workflows and tools.
- Monitor and troubleshoot data systems to ensure high availability and performance.
- Implement and enforce best practices for data security, governance, and quality.
- Evaluate and integrate new technologies to enhance data engineering capabilities.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Proven experience as a Data Engineer or in a similar role.
- Proficiency in programming languages such as Python, Java, or Scala.
- Hands-on experience with data pipeline tools (e.g., Apache Airflow, AWS Glue).
- Strong knowledge of SQL and database systems (e.g., PostgreSQL, MySQL, MongoDB).
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and big data technologies (e.g., Hadoop, Spark).
- Familiarity with data modeling, schema design, and data warehousing concepts.
- Understanding of CI/CD pipelines and version control systems like Git.

Preferred Skills:
- Familiarity with machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
- Experience deploying machine learning models and working with MLOps tools.
- Knowledge of distributed systems and real-time data processing (e.g., Kafka, Flink).
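The ETL sketch referenced in the responsibilities above: a minimal extract-transform-load flow in pandas, assuming hypothetical column names and file paths.

```python
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    # Extract: read raw records from a source file.
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Transform: drop unusable rows, normalize types, derive fields.
    df = df.dropna(subset=["order_id"])
    df["order_date"] = pd.to_datetime(df["order_date"])
    df["revenue"] = df["quantity"] * df["unit_price"]
    return df

def load(df: pd.DataFrame, path: str) -> None:
    # Load: write the cleaned dataset to columnar storage.
    df.to_parquet(path, index=False)

if __name__ == "__main__":
    load(transform(extract("orders.csv")), "orders_clean.parquet")
```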
Posted 1 week ago
4.0 - 7.0 years
18 - 20 Lacs
Pune
Hybrid
Job Title: GCP Data Engineer
Location: Pune, India
Experience: 4 to 7 Years
Job Type: Full-Time

Job Summary:
We are looking for a highly skilled GCP Data Engineer with 4 to 7 years of experience to join our data engineering team in Pune. The ideal candidate should have strong experience working with Google Cloud Platform (GCP), including Dataproc and Cloud Composer (Apache Airflow), and must be proficient in Python, SQL, and Apache Spark. The role involves designing, building, and optimizing data pipelines and workflows to support enterprise-grade analytics and data science initiatives.

Key Responsibilities:
- Design and implement scalable and efficient data pipelines on GCP, leveraging Dataproc, BigQuery, Cloud Storage, and Pub/Sub (a BigQuery sketch follows below).
- Develop and manage ETL/ELT workflows using Apache Spark, SQL, and Python.
- Orchestrate and automate data workflows using Cloud Composer (Apache Airflow).
- Build batch and streaming data processing jobs that integrate data from various structured and unstructured sources.
- Optimize pipeline performance and ensure cost-effective data processing.
- Collaborate with data analysts, scientists, and business teams to understand data requirements and deliver high-quality solutions.
- Implement and monitor data quality checks, validation, and transformation logic.

Required Skills:
- Strong hands-on experience with Google Cloud Platform (GCP)
- Proficiency with Dataproc for big data processing and Apache Spark
- Expertise in Python and SQL for data manipulation and scripting
- Experience with Cloud Composer/Apache Airflow for workflow orchestration
- Knowledge of data modeling, warehousing, and pipeline best practices
- Solid understanding of ETL/ELT architecture and implementation
- Strong troubleshooting and problem-solving skills

Preferred Qualifications:
- GCP Data Engineer or Cloud Architect certification.
- Familiarity with BigQuery, Dataflow, and Pub/Sub.

Interested candidates can send their resume to pranitathapa@onixnet.com
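The BigQuery sketch referenced above: a minimal aggregation query run through the google-cloud-bigquery client. The project, dataset, table, and column names are hypothetical, and the client assumes application-default credentials are configured.

```python
from google.cloud import bigquery

client = bigquery.Client()  # picks up application-default credentials

# Hypothetical table and columns for illustration.
sql = """
    SELECT station_id, AVG(reading) AS avg_reading
    FROM `my-project.telemetry.readings`
    WHERE DATE(event_ts) = CURRENT_DATE()
    GROUP BY station_id
    ORDER BY avg_reading DESC
"""

for row in client.query(sql).result():
    print(row.station_id, row.avg_reading)
```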
Posted 1 week ago
7.0 - 10.0 years
20 - 27 Lacs
Noida
Work from Office
Job Responsibilities:

Technical Leadership:
- Provide technical leadership and mentorship to a team of data engineers.
- Design, architect, and implement highly scalable, resilient, and performant data pipelines; experience using GCP technologies (e.g., Dataproc, Cloud Composer, Pub/Sub, BigQuery) is a plus.
- Guide the team in adopting best practices for data engineering, including CI/CD, infrastructure-as-code, and automated testing.
- Conduct code reviews and design reviews, and provide constructive feedback to team members.
- Stay up to date with the latest technologies and trends in data engineering.

Data Pipeline Development:
- Develop and maintain robust and efficient data pipelines to ingest, process, and transform large volumes of structured and unstructured data from various sources.
- Implement data quality checks and monitoring systems to ensure data accuracy and integrity (a minimal validation sketch follows below).
- Collaborate with cross-functional teams and business stakeholders to understand data requirements and deliver data solutions that meet their needs.

Platform Building & Maintenance:
- Design and implement secure and scalable data storage solutions.
- Manage and optimize cloud infrastructure costs related to data engineering workloads.
- Contribute to the development and maintenance of data engineering tooling and infrastructure to improve team productivity and efficiency.

Collaboration & Communication:
- Effectively communicate technical designs and concepts to both technical and non-technical audiences.
- Collaborate effectively with other engineering teams, product managers, and business stakeholders.
- Contribute to knowledge sharing within the team and across the organization.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience in data engineering and software development.
- 7+ years of experience coding in SQL and Python/Java.
- 3+ years of hands-on experience building and managing data pipelines in a cloud environment such as GCP.
- Strong programming skills in Python or Java, with experience developing data-intensive applications.
- Expertise in SQL and data modeling techniques for both transactional and analytical workloads.
- Experience with CI/CD pipelines and automated testing frameworks.
- Excellent communication, interpersonal, and problem-solving skills.
- Experience leading or mentoring a team of engineers.
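The validation sketch referenced above: one way a pipeline might express data quality checks as plain Python over a pandas frame. The column names and thresholds are hypothetical.

```python
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Return human-readable data quality failures for a batch."""
    failures = []
    if df["event_id"].duplicated().any():
        failures.append("duplicate event_id values found")
    if (df["amount"] < 0).any():
        failures.append("negative amounts found")
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:  # hypothetical 1% tolerance
        failures.append(f"customer_id null rate {null_rate:.1%} exceeds threshold")
    return failures

batch = pd.DataFrame({
    "event_id": [1, 2, 2],
    "amount": [10.0, -5.0, 7.5],
    "customer_id": ["c1", None, "c3"],
})
for problem in validate(batch):
    print("QUALITY CHECK FAILED:", problem)
```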
Posted 1 week ago
10.0 - 17.0 years
22 - 37 Lacs
Pune
Hybrid
Hi, greetings from Peoplefy Infosolutions!

We are hiring for one of our reputed MNC clients based in Pune. We are looking for candidates with 10+ years of experience who are currently working as a Data Architect.

Job Description:
We are seeking a highly skilled and experienced Cloud Data Architect to design, implement, and manage scalable, secure, and efficient cloud-based data solutions. The ideal candidate will possess a strong combination of technical expertise, analytical skills, and the ability to collaborate effectively with cross-functional teams to translate business requirements into technical solutions.

Key Responsibilities:
- Design and implement data architectures, including data pipelines, data lakes, and data warehouses, on cloud platforms.
- Develop and optimize data models (e.g., star schema, snowflake schema) to support business intelligence and analytics (a minimal star-schema sketch follows below).
- Leverage big data technologies (e.g., Hadoop, Spark, Kafka) to process and analyze large-scale datasets.
- Manage and optimize relational and NoSQL databases for performance and scalability.
- Develop and maintain ETL/ELT workflows using tools like Apache NiFi, Talend, or Informatica.
- Ensure data security and compliance with regulations such as GDPR and CCPA.
- Automate infrastructure deployment using CI/CD pipelines and Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation).
- Collaborate with analytics teams to integrate machine learning frameworks and visualization tools (e.g., Tableau, Power BI).
- Provide technical leadership and mentorship to team members.

Interested candidates can share their CVs at sneh.ne@peoplefy.com with the following details:
- Experience
- CTC
- Expected CTC
- Notice Period
- Location
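The star-schema sketch referenced above: a minimal dimensional model expressed as DDL, run here against an in-memory SQLite database purely so the snippet is self-contained; the table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Dimensions carry descriptive attributes.
CREATE TABLE dim_customer (
    customer_key  INTEGER PRIMARY KEY,
    customer_name TEXT,
    region        TEXT
);
CREATE TABLE dim_date (
    date_key  INTEGER PRIMARY KEY,  -- e.g. 20240131
    full_date TEXT,
    month     INTEGER,
    year      INTEGER
);
-- The fact table holds measures plus foreign keys to each dimension.
CREATE TABLE fact_sales (
    sale_id      INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key     INTEGER REFERENCES dim_date(date_key),
    quantity     INTEGER,
    amount       REAL
);
""")
print("star schema created")
```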
Posted 1 week ago
4.0 - 8.0 years
20 - 35 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Salary: 20 to 35 LPA
Experience: 3 to 7 years
Location: Gurgaon/Pune/Bengaluru
Notice: Immediate to 30 days

Job Profile:
Experienced Data Engineer with a strong foundation in designing, building, and maintaining scalable data pipelines and architectures. Skilled in transforming raw data into clean, structured formats for analytics and business intelligence. Proficient in modern data tools and technologies such as SQL, T-SQL, Python, Databricks, and cloud platforms (Azure). Adept at data wrangling, modeling, and ETL/ELT development, and at ensuring data quality, integrity, and security. A collaborative team player with a track record of enabling data-driven decision-making across business units.

As a Data Engineer, the candidate will work on assignments for one of our Utilities clients. Collaborating with cross-functional teams and stakeholders involves gathering data requirements, aligning business goals, and translating them into scalable data solutions. The role includes working closely with data analysts, scientists, and business users to understand needs, designing robust data pipelines, and ensuring data is accessible, reliable, and well documented. Regular communication, iterative feedback, and joint problem-solving are key to delivering high-impact, data-driven outcomes that support organizational objectives. This position requires a proven track record of transforming processes and driving customer value and cost savings, with experience running end-to-end analytics for large-scale organizations.

Responsibilities:
- Design, build, and maintain scalable data pipelines to support analytics, reporting, and advanced modeling needs (a Databricks-style sketch follows below).
- Collaborate with consultants, analysts, and clients to understand data requirements and translate them into effective data solutions.
- Ensure data accuracy, quality, and integrity through validation, cleansing, and transformation processes.
- Develop and optimize data models, ETL workflows, and database architectures across cloud and on-premises environments.
- Support data-driven decision-making by delivering reliable, well-structured datasets and enabling self-service analytics.
- Provide seamless integration with cloud platforms (Azure), making it easy to build and deploy end-to-end data pipelines in the cloud.
- Use scalable Databricks clusters for handling large datasets and complex computations, optimizing performance and cost.

Must have:
- Client engagement experience and collaboration with cross-functional teams
- Data engineering background in Databricks
- Ability to work effectively as an individual contributor or in collaborative team environments
- Effective communication and thought leadership with a proven record

Candidate Profile:
- Bachelor's/Master's degree in economics, mathematics, computer science/engineering, operations research, or related analytics areas
- 3+ years of experience, which must be in data engineering
- Hands-on experience with SQL, Python, Databricks, and cloud platforms like Azure
- Prior experience managing and delivering end-to-end projects
- Outstanding written and verbal communication skills
- Able to work in a fast-paced, continuously evolving environment and ready to take on uphill challenges
- Able to understand cross-cultural differences and work with clients across the globe
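The Databricks-style sketch referenced above: a minimal PySpark job that reads raw utility meter readings, cleans them, and writes a curated Delta table. The paths and column names are hypothetical; on Databricks, the `spark` session and Delta format are provided by the runtime.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical raw landing zone and columns.
raw = spark.read.option("header", True).csv("/mnt/raw/meter_readings/")

clean = (raw
         .withColumn("reading_kwh", F.col("reading_kwh").cast("double"))
         .filter(F.col("reading_kwh").isNotNull() & (F.col("reading_kwh") >= 0))
         .dropDuplicates(["meter_id", "reading_ts"]))

# Write a curated Delta table for analysts and self-service BI.
(clean.write
 .format("delta")
 .mode("overwrite")
 .save("/mnt/curated/meter_readings"))
```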
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
noida, uttar pradesh
On-site
You will need hands-on experience converting SAS to Python, along with strong mathematics and statistics skills, to excel in this role. Proficiency in AI-specific utilities such as ChatGPT and Hugging Face Transformers is also essential, as is the ability to comprehend business requirements and derive use cases and solutions from structured and unstructured data. Your responsibilities will include converting SAS code to Python, acquiring the skills needed to build and deploy machine learning models for production, and engaging in feature engineering, exploratory data analysis, pipeline creation, model training, and hyperparameter tuning (a minimal tuning sketch follows below). You will also be expected to develop and deploy cloud-based applications, including LLM/GenAI applications, into production. Proficiency in SAS, Python, Scikit-Learn, TensorFlow, PyTorch, and Keras is required, along with expertise in exploratory data analysis, machine learning and deep learning algorithms, model building, hyperparameter tuning, and model performance metrics. Knowledge of MLOps, data pipelines, data engineering, statistics, time series modeling, forecasting, image/video analytics, and natural language processing (NLP) will also be advantageous. Experience with ML services from cloud platforms like AWS, GCP, Azure, and Databricks is preferred, and basic knowledge of Databricks and big data tools such as Spark and Hive is considered an asset. If you are looking to join a dynamic team and work in Indore, Noida, Gurgaon, Bangalore, or Pune, please share your profile with details of your notice period and compensation.
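The tuning sketch referenced above: a scikit-learn pipeline with grid-searched hyperparameters, using a synthetic dataset so it runs standalone; the parameter grid is an arbitrary illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("model", GradientBoostingClassifier(random_state=42)),
])

grid = GridSearchCV(
    pipe,
    param_grid={                       # arbitrary illustrative grid
        "model__n_estimators": [100, 300],
        "model__learning_rate": [0.05, 0.1],
    },
    cv=5,
    scoring="roc_auc",
)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.score(X_test, y_test))
```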
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
pune, maharashtra
On-site
You should have 5+ years of experience in core Java and the Spring Framework, plus at least 2 years of experience with cloud technologies such as GCP, AWS, or Azure (GCP preferred). Experience in big data processing on a distributed system and in working with databases — RDBMS, NoSQL databases, and cloud-native databases — is required. You should also have expertise in handling various data formats such as flat files, JSON, Avro, and XML, including defining schemas and contracts. Furthermore, you should have experience implementing data pipelines (ETL) using Dataflow (Apache Beam) and working with microservices and patterns for integrating APIs with data processing. Experience with data structures and with defining and designing data models will be beneficial for this role.
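As a hedged illustration of the Dataflow (Apache Beam) requirement, here is a minimal Beam pipeline in Python that parses JSON lines, filters invalid records, and writes the result. The file paths and field names are hypothetical; run locally it uses the direct runner, while on GCP you would pass Dataflow runner options.

```python
import json

import apache_beam as beam

def parse(line: str) -> dict:
    record = json.loads(line)
    record["amount"] = float(record["amount"])  # enforce the schema contract
    return record

with beam.Pipeline() as p:
    (p
     | "Read" >> beam.io.ReadFromText("input.jsonl")
     | "Parse" >> beam.Map(parse)
     | "KeepValid" >> beam.Filter(lambda r: r["amount"] > 0)
     | "Serialize" >> beam.Map(json.dumps)
     | "Write" >> beam.io.WriteToText("cleaned"))
```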
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
hyderabad, telangana
On-site
The successful candidate for the Full Stack Developer position at USP will demonstrate an understanding of the organization's mission and a commitment to excellence through inclusive and equitable behaviors. As a key member of the Digital & Innovation group, you will be responsible for developing innovative digital products using cutting-edge cloud technologies. Your role will involve building scalable applications and platforms, ensuring compliance with governance principles and security policies, participating in code reviews and agile development processes, and providing technical guidance to junior developers. Effective communication of technical designs and solutions to both technical and non-technical stakeholders will be essential.

To qualify for this position, you should possess a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with 6-10 years of software development experience focused on cloud computing. Proficiency in cloud platforms such as AWS, Azure, and Google Cloud is required, as is expertise in Java Spring Boot applications, Python, Node.js, and front-end technologies like React.js. Experience with AWS/Azure services, containerization technologies like Docker and Kubernetes, data pipeline tools, and microservices architecture is also necessary. Additionally, familiarity with cloud architecture patterns, security principles, and automated testing practices will be beneficial.

Desired qualifications include experience with scientific chemistry nomenclature or life sciences, pharmaceutical datasets and nomenclature, and knowledge graphs, plus the ability to explain complex technical issues to a non-technical audience. Strong analytical, problem-solving, and communication skills are essential, as is the ability to manage multiple projects in a dynamic environment and make tough decisions when necessary. You will be expected to work independently on most deliverables, lead initiatives for continuous improvement, and prioritize tasks effectively.

As an employee of USP, you will have access to a comprehensive benefits package designed to protect your personal and financial well-being. USP is an independent scientific organization dedicated to developing quality standards for medicines, dietary supplements, and food ingredients in collaboration with global health and science authorities. With a core value of Passion for Quality, USP strives to strengthen the supply of safe, quality medicines and supplements worldwide through the efforts of over 1,300 professionals across twenty global locations. Inclusivity, mentorship opportunities, and professional growth are valued at USP, where Diversity, Equity, Inclusion, and Belonging are central to creating a world where quality healthcare is accessible to all.
Posted 1 week ago
4.0 - 9.0 years
16 - 25 Lacs
Navi Mumbai, Bengaluru, Mumbai (All Areas)
Hybrid
Role & responsibilities:
- Design and implement scalable data pipelines for feature extraction, transformation, and loading (ETL) using technologies such as PySpark, Scala, and relevant big data frameworks.
- Govern and optimize data pipelines to ensure high reliability, efficiency, and data quality across on-premise and cloud environments.
- Collaborate closely with data scientists, ML engineers, and business stakeholders to understand data requirements and translate them into technical solutions.
- Implement best practices for data governance, metadata management, and compliance with regulatory requirements.
- Lead a team of data engineers, providing technical guidance and mentorship and fostering a culture of innovation and collaboration.
- Stay updated on industry trends and advancements in big data technologies and contribute to the continuous improvement of our data engineering practices.

Preferred candidate profile:
- Strong experience in data engineering, with hands-on experience designing and implementing data pipelines.
- Strong proficiency in programming languages such as PySpark and Scala, with experience in big data technologies (Cloudera, Hadoop ecosystem).
- Proven leadership experience in managing and mentoring a team of data engineers.
- Experience working in a banking or financial services environment is a plus.
- Excellent communication skills, with the ability to collaborate effectively across teams and stakeholders.
Posted 1 week ago
6.0 - 9.0 years
25 - 32 Lacs
Bangalore/Bengaluru
Work from Office
Full-time role with a top German MNC in Bangalore. Experience with Scala/Java is a must.

Job Description
As a data engineer on our team, you will work with large-scale manufacturing data coming from our globally distributed plants. You will focus on building efficient, scalable, data-driven applications. The datasets produced by these applications — whether data streams or data at rest — need to be highly available, reliable, consistent, and quality-assured so that they can serve as input to a wide range of other use cases and downstream applications. We run these applications on Azure Databricks; besides building applications, you will also contribute to scaling the platform, including topics such as automation and observability. Finally, you are expected to interact with customers and other technical teams, e.g. for requirements clarification and definition of data models.

Primary responsibilities:
- Be a key contributor to the organization's hybrid cloud data platform (on-prem and cloud).
- Design and build data pipelines on a global scale, ranging from small to huge datasets.
- Design applications and data models based on deep business understanding and customer requirements.
- Work directly with architects and technical leadership to design and implement applications and/or architectural components.
- Provide architectural proposals and estimations for applications, and technical leadership to the team.
- Coordinate and collaborate with central teams on tasks and standards.
- Develop data integration workflows in Azure.
- Develop streaming applications using Scala.
- Integrate end-to-end Azure Databricks pipelines that take data from source systems to target systems, ensuring the quality and consistency of data.
- Define and implement data quality and validation checks.
- Configure data processing and transformation.
- Write unit test cases for data pipelines.
- Tune pipeline configurations for optimal performance.
- Participate in peer reviews and PR reviews for code written by team members.

Qualifications
Bachelor's degree in computer science, computer engineering, or a relevant technical field, or equivalent; Master's degree preferred.

Additional Information
Skills:
- Deep technical expertise; capable of working directly with architects and technical leadership.
- Able to guide junior team members in technical questions related to architecture or software and system design.
- Self-starter and empowered professional with strong execution and communication capabilities.
- Proactive mindset: identifies and starts work independently, challenges the status quo, accepts being challenged.
- Outstanding written and verbal communication skills.

Key Competencies:
- 6+ years of experience in data engineering, ETL tools, and working with large datasets.
- Minimum 5 years of working experience with distributed clusters.
- At least 5 years of experience in Scala/Java software development.
- At least 2-3 years of Azure Databricks cloud experience in data engineering.
- Experience with Delta tables, ADLS, DBFS, and ADF.
- Deep understanding of distributed systems for data storage and processing (e.g. Kafka, Spark, Azure Cloud).
- Experience with cloud-based SQL databases (Azure SQL).
- Excellent software engineering skills (i.e., data structures, algorithms, software design).
- Excellent problem-solving, investigative, and troubleshooting skills.
- Experience with CI/CD tools such as Jenkins and GitHub.
- Ability to work independently.

Soft Skills:
- Good communication skills.
- Ability to coach and guide junior data engineers.
- Decent level of English as a business language.
Posted 1 week ago
7.0 - 12.0 years
20 - 35 Lacs
Hyderabad, Bengaluru
Hybrid
Job Role: Backend and Data Pipeline Engineer
Location: Hyderabad/Bangalore (Hybrid)
Job Type: Full-time
**Immediate joiners only (0-15 days)**

Job Summary:

The Team: We're investing in technology to develop new products that help our customers drive their growth and transformation agenda. These include new data integration, advanced analytics, and modern applications that address new customer needs and are highly visible and strategic within the organization. Do you love building products on platforms at scale while leveraging cutting-edge technology? Do you want to deliver innovative solutions to complex problems? If so, be part of our mighty team of engineers and play a key role in driving our business strategies.

The Impact: We stand at the crossroads of innovation through data products, bringing a competitive advantage to our business through the delivery of automotive forecasting solutions. Your work will contribute to the growth and success of our organization and provide valuable insights to our clients.

What's in it for you: We are looking for an innovative and mission-driven software/data engineer to make a significant impact by designing and developing AWS cloud-native solutions that enable analysts to forecast long- and short-term trends in the automotive industry. This role requires cutting-edge data and cloud-native technical expertise as well as the ability to work independently in a fast-paced, collaborative, and dynamic work environment.

Responsibilities:
- Design, develop, and maintain scalable data pipelines, including complex algorithms.
- Build and maintain UI backend services using Python, C#, or similar, ensuring responsiveness and high performance.
- Ensure data quality and integrity through robust validation processes.
- Apply a strong understanding of data integration and data modeling concepts.
- Lead data integration projects and mentor junior engineers.
- Collaborate with cross-functional teams to gather data requirements.
- Collaborate with data scientists and analysts to optimize data flow and storage for advanced analytics.
- Take ownership of the modules you work on, deliver on time and with quality, and follow software development best practices.
- Utilize Redis for caching and data storage to enhance application performance (a minimal sketch follows below).

What we're looking for:
- Bachelor's degree in computer science or a related field.
- Strong analytical and problem-solving skills.
- 7+ years of experience in data engineering/advanced analytics.
- Proficiency in Python and experience with Flask for backend development.
- Strong knowledge of object-oriented programming.
- AWS proficiency (e.g., ECR, containers) is a big plus.
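The sketch referenced above: a minimal Flask endpoint that caches an expensive forecast computation in Redis with a five-minute TTL. The endpoint path, key scheme, and payload are hypothetical, and it assumes the flask and redis packages plus a local Redis server.

```python
import json

import redis
from flask import Flask, jsonify

app = Flask(__name__)
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def compute_forecast(segment: str) -> dict:
    # Placeholder for the expensive pipeline/model call.
    return {"segment": segment, "forecast_units": 123400}

@app.get("/forecast/<segment>")
def forecast(segment: str):
    key = f"forecast:{segment}"
    cached = cache.get(key)
    if cached is not None:
        return jsonify(json.loads(cached))      # cache hit: skip recompute
    result = compute_forecast(segment)
    cache.setex(key, 300, json.dumps(result))   # cache for 5 minutes
    return jsonify(result)

if __name__ == "__main__":
    app.run(port=8000)
```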
Posted 1 week ago
4.0 - 8.0 years
10 - 20 Lacs
Kolkata
Remote
We are seeking a highly skilled and experienced Data Engineer to join our dynamic data team. The ideal candidate will have deep expertise in Snowflake, dbt (Data Build Tool), and Python, with a strong understanding of data architecture, transformation pipelines, and data quality principles. You will be instrumental in building and maintaining scalable data pipelines and enabling data-driven decision-making across the organization.

Key Responsibilities:
- Design, develop, and maintain scalable and efficient ETL/ELT pipelines using dbt, Snowflake, and Python (a connector sketch follows below).
- Optimize data models and warehouse performance in Snowflake.
- Collaborate with data analysts, scientists, and business teams to understand data needs and deliver high-quality datasets.
- Ensure data quality, governance, and compliance across pipelines.
- Automate data workflows and monitor production jobs to ensure accuracy and reliability.
- Participate in architectural decisions and advocate for best practices in data engineering.
- Maintain documentation of data pipelines, transformations, and data models.
- Mentor junior engineers and contribute to team knowledge sharing.

Required Skills & Qualifications:
- 4+ years of professional experience in data engineering.
- Strong hands-on experience with Snowflake (data modeling, performance tuning, security features).
- Proven experience using dbt for data transformation and modeling.
- Proficiency in Python for data engineering tasks and scripting.
- Solid understanding of SQL and experience building and maintaining complex queries.
- Experience with orchestration tools (e.g., Airflow, Prefect) is a plus.
- Familiarity with version control systems like Git.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.

Preferred Qualifications:
- Experience working with cloud platforms like AWS, Azure, or GCP.
- Knowledge of data lake architecture and real-time streaming technologies.
- Exposure to CI/CD pipelines for data deployment.
- Experience with agile development methodologies.
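The connector sketch referenced above: querying Snowflake from Python with the snowflake-connector-python package. All credentials and object names are hypothetical placeholders; real values belong in a secrets manager, not in code.

```python
import snowflake.connector

# Hypothetical credentials and objects, for illustration only.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)
cur = conn.cursor()
try:
    cur.execute("""
        SELECT order_date, COUNT(*) AS orders
        FROM raw_orders
        GROUP BY order_date
        ORDER BY order_date
    """)
    for order_date, orders in cur.fetchall():
        print(order_date, orders)
finally:
    cur.close()
    conn.close()
```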
Posted 1 week ago
4.0 - 6.0 years
20 - 25 Lacs
Noida, Pune, Chennai
Work from Office
We are seeking a skilled and detail-oriented Data Engineer with 4 to 6 years of hands-on experience in Microsoft Fabric, Snowflake, and Matillion. The ideal candidate will play a key role in supporting Microsoft Fabric and migrating from Microsoft Fabric to Snowflake and Matillion.

Roles and Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines using Matillion, integrating data from various sources.
- Architect and optimize Snowflake data warehouses, ensuring efficient data storage, querying, and performance tuning.
- Leverage Microsoft Fabric for end-to-end data engineering tasks, including data ingestion, transformation, and reporting.
- Collaborate with data analysts, scientists, and business stakeholders to deliver high-quality, consumable data products.
- Implement data quality checks, monitoring, and observability across pipelines.
- Automate data workflows and support CI/CD practices for data deployments.
- Troubleshoot performance bottlenecks and data pipeline failures with a root-cause-analysis mindset.
- Maintain thorough documentation of data processes, pipelines, and architecture.

Required Skills:
- Strong expertise with:
  - Microsoft Fabric (Dataflows, Pipelines, Lakehouse, Notebooks, etc.)
  - Snowflake (warehouse sizing, SnowSQL, performance tuning)
  - Matillion (ETL/ELT orchestration, job optimization, connectors)
- Proficiency in SQL and data modeling (dimensional/star schema, normalization).
- Experience with Python or other scripting languages for data manipulation.
- Familiarity with version control tools (e.g., Git) and CI/CD workflows.
- Solid understanding of cloud data architecture (Azure preferred).
- Strong problem-solving and debugging skills.
Posted 1 week ago
4.0 - 8.0 years
6 - 16 Lacs
Bengaluru
Work from Office
Key skills: ML, AI, Python, TensorFlow, decision trees, SageMaker, Transcribe, Lambda, data pipelines, Docker, CI/CD, Jenkins, GitLab
Posted 1 week ago
5.0 - 10.0 years
6 - 10 Lacs
Mumbai
Remote
Travel Requirement: willingness to travel to the UK as needed will be a plus.

Job Description:
We are seeking a highly experienced Senior Data Engineer with a background in Microsoft Fabric and completed projects using it. This is a remote position based in India, ideal for professionals who are open to occasional travel to the UK; a valid passport is required.

Key Responsibilities:
- Design and implement scalable data solutions using Microsoft Fabric
- Lead complex data integration, transformation, and migration projects
- Collaborate with global teams to deliver end-to-end data pipelines and architecture
- Optimize the performance of data systems and troubleshoot issues proactively
- Ensure data governance, security, and compliance with industry best practices

Required Skills and Experience:
- 5+ years of experience in data engineering, including architecture and development
- Expertise in Microsoft Fabric, Data Lake, Azure Data Services, and related technologies
- Experience in SQL, data modeling, and data pipeline development
- Knowledge of modern data platforms and big data technologies
- Excellent communication and leadership skills

Preferred Qualifications:
- Good communication skills
- Understanding of data governance and security best practices

Perks & Benefits:
- Work-from-home flexibility
- Competitive salary and perks
- Opportunities for international exposure
- Collaborative and inclusive work culture
Posted 1 week ago
3.0 - 6.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Duration: 6 months
Timings: General IST
Notice Period: within 15 days, or immediate joiner

About the Role:
As a Data Engineer for the Data Science team, you will play a pivotal role in enriching and maintaining the organization's central repository of datasets. This repository serves as the backbone for advanced data analytics and machine learning applications, enabling actionable insights from financial and market data. You will work closely with cross-functional teams to design and implement robust ETL pipelines that automate data updates and ensure accessibility across the organization. This is a critical role requiring technical expertise in building scalable data pipelines, ensuring data quality, and supporting data analytics and reporting infrastructure for business growth.

Note: Must be available for a face-to-face interview in Bangalore (final round). Should be working with Azure as the cloud technology.

Key Responsibilities:

ETL Development:
- Design, develop, and maintain efficient ETL processes for handling multi-scale datasets.
- Implement and optimize data transformation and validation processes to ensure data accuracy and consistency.
- Collaborate with cross-functional teams to gather data requirements and translate business logic into ETL workflows.

Data Pipeline Architecture:
- Architect, build, and maintain scalable and high-performance data pipelines to enable seamless data flow.
- Evaluate and implement modern technologies to enhance the efficiency and reliability of data pipelines.
- Build pipelines for extracting data via web scraping to source sector-specific datasets on an ad hoc basis (a minimal scraping sketch follows below).

Data Modeling:
- Design and implement data models to support analytics and reporting needs across teams.
- Optimize database structures to enhance performance and scalability.

Data Quality and Governance:
- Develop and implement data quality checks and governance processes to ensure data integrity.
- Collaborate with stakeholders to define and enforce data quality standards across the organization.

Documentation and Communication:
- Maintain detailed documentation of ETL processes, data models, and other key workflows.
- Effectively communicate complex technical concepts to non-technical stakeholders and business teams.

Collaboration:
- Work closely with the Quant team and developers to design and optimize data pipelines.
- Collaborate with external stakeholders to understand business requirements and translate them into technical solutions.

Essential Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Familiarity with big data technologies like Hadoop, Spark, and Kafka.
- Experience with data modeling tools and techniques.
- Excellent problem-solving, analytical, and communication skills.
- Proven experience as a Data Engineer with expertise in ETL techniques (minimum years).
- 3-6 years of strong programming experience in languages such as Python, Java, or Scala.
- Hands-on experience in web scraping to extract and transform data from publicly available web sources.
- Proficiency with cloud-based data platforms such as AWS, Azure, or GCP.
- Strong knowledge of SQL and experience with relational and non-relational databases.
- Deep understanding of data warehousing concepts.

Preferred Qualifications:
- Master's degree in Computer Science or Data Science.
- Knowledge of data streaming and real-time processing frameworks.
- Familiarity with data governance and security best practices.
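The scraping sketch referenced above: a minimal requests-plus-BeautifulSoup routine that turns the first HTML table on a page into a list of dicts. The URL and user-agent string are hypothetical, and a production pipeline would add rate limiting, retries, and respect for robots.txt.

```python
import requests
from bs4 import BeautifulSoup

def scrape_table(url: str) -> list[dict]:
    resp = requests.get(url, headers={"User-Agent": "research-bot/0.1"}, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    table = soup.find("table")
    headers = [th.get_text(strip=True) for th in table.find_all("th")]
    rows = []
    for tr in table.find_all("tr")[1:]:  # skip the header row
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        if len(cells) == len(headers):
            rows.append(dict(zip(headers, cells)))
    return rows

if __name__ == "__main__":
    print(scrape_table("https://example.com/sector-data"))  # hypothetical URL
```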
Posted 1 week ago
5.0 - 10.0 years
20 - 35 Lacs
Chennai
Work from Office
Role & responsibilities:
- University degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in a Machine Learning Operations role.
- Design the data pipelines and engineering infrastructure to support our clients' enterprise machine learning systems at scale.
- Take offline models that data scientists build and turn them into real machine learning production systems (a minimal train-then-serve sketch follows below).
- Develop and deploy scalable tools and services for our clients to handle machine learning training and inference.
- Identify and evaluate new technologies to improve the performance, maintainability, and reliability of our clients' machine learning systems.
- Apply software engineering rigor and best practices to machine learning, including CI/CD, automation, etc.
- Support model development, with an emphasis on auditability, versioning, and data security.
- Facilitate the development and deployment of proof-of-concept machine learning systems.
- Communicate with clients to gather requirements and track progress.
- Strong analytical skills for working with structured, semi-structured, and unstructured datasets.
- Advanced machine learning techniques: decision trees, random forests, boosting algorithms, neural networks, deep learning, support vector machines, clustering, Bayesian networks, reinforcement learning, and feature reduction.
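The train-then-serve sketch referenced above: the smallest version of turning an offline model into a servable artifact, persisting a fitted scikit-learn model with joblib and reloading it for inference. The filename stands in for proper artifact versioning in a model registry.

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Training side: fit the offline model and persist a versioned artifact.
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=42).fit(X, y)
joblib.dump(model, "model_v1.joblib")  # stand-in for a model registry entry

# Serving side: load the exact same artifact and run inference.
served = joblib.load("model_v1.joblib")
print(served.predict(X[:3]))
```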
Posted 1 week ago
5.0 - 10.0 years
10 - 20 Lacs
Kochi
Remote
Senior Data Engineer (Databricks) - Remote
Location: Remote (Portugal)
Type: Contract
Experience: 5+ years
Language: Fluent English required

We are looking for a Senior Data Engineer to join our remote consulting team. In this role, you'll be responsible for designing, building, and optimizing large-scale data processing systems using Databricks and modern data engineering tools. You'll collaborate closely with data scientists, analysts, and technical teams to deliver scalable and reliable data platforms.

Key Responsibilities:
- Design, develop, and maintain robust data pipelines for processing structured and unstructured data
- Build and manage data lakes and data warehouses optimized for analytics
- Optimize data workflows for performance, scalability, and cost-efficiency
- Collaborate with stakeholders to gather data requirements and translate them into scalable solutions
- Implement data governance, data quality, and security best practices
- Migrate legacy data processes (e.g., from SAS) to modern platforms
- Document architecture, data models, and pipelines

Required Qualifications:
- 5+ years of experience in data engineering or related fields
- 3+ years of hands-on experience with Databricks
- Strong command of SQL and experience with PostgreSQL, MySQL, or NoSQL databases
- Programming experience in Python, Java, or Scala
- Experience with ETL processes, orchestration frameworks, and data pipeline automation
- Familiarity with Spark, Kafka, or similar big data tools
- Experience working on cloud platforms (AWS, Azure, or GCP)
- Prior experience migrating from SAS is a plus
- Excellent communication skills in English
Posted 1 week ago
2.0 - 5.0 years
7 - 17 Lacs
Hyderabad
Hybrid
We are looking for a highly skilled Data Scientist to join our team and help drive data-driven decisions and AI-powered innovation. The ideal candidate will have strong analytical and problem-solving skills, experience working with large datasets, and a deep understanding of machine learning, statistical modeling, and artificial intelligence tools, including growing exposure to Generative AI (GenAI) technologies. As a Data Scientist, you will collaborate with various teams to analyze data, build predictive and generative models, and contribute to the company's strategic goals. You'll also have opportunities to explore and experiment with emerging GenAI techniques, such as large language models (LLMs), prompt engineering, and synthetic data generation, to support cutting-edge solutions.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Mathematics, or a related field.
- 2+ years of experience in a data science or applied ML role.
- Proficiency in Python, R, or SQL, and libraries such as scikit-learn, pandas, or NumPy.
- Strong grasp of machine learning concepts, including regression, classification, clustering, and model evaluation.
- Experience with data visualization tools like Power BI, Tableau, or matplotlib/seaborn.
- Hands-on experience with AI/ML frameworks such as TensorFlow, PyTorch, Keras, or OpenCV.
- Basic familiarity with Generative AI concepts (e.g., LLMs, prompt engineering, or transformer models) and a willingness to learn and apply them (a minimal sketch follows below).
- Excellent analytical and problem-solving skills, with strong attention to detail.
- Effective communication skills, with the ability to translate complex technical findings for business stakeholders.

Preferred candidate profile:
- Experience with cloud platforms like Azure, AWS, or GCP.
- Exposure to LLM APIs (e.g., OpenAI, Hugging Face Transformers) or LangChain for building GenAI prototypes.
- Knowledge of deep learning, neural networks, and unstructured data (e.g., text, images).
- Familiarity with data engineering and ETL processes.
- Awareness of AI ethics, bias detection, and responsible model development practices.
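The sketch referenced above: a few-line transformer-based prototype using the Hugging Face transformers pipeline API, which downloads a small pretrained sentiment model on first use; the input sentence is an arbitrary example.

```python
from transformers import pipeline

# Loads a default pretrained transformer model on first call.
classifier = pipeline("sentiment-analysis")

result = classifier("The new data platform cut our reporting time in half.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```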
Posted 1 week ago
8.0 - 13.0 years
16 - 22 Lacs
Hyderabad
Work from Office
Looking for a Data Engineer with 8+ years of experience to build scalable data pipelines on AWS/Azure, work with big data tools (Spark, Kafka), and support analytics teams. Must have strong coding skills in Python/Java and experience with SQL/NoSQL databases and cloud platforms.

Required candidate profile: Strong experience in Java/Scala/Python. Has worked with big data technologies: Spark, Kafka, Flink, etc. Has built real-time and batch data pipelines. Cloud: AWS, Azure, or GCP.
Posted 1 week ago
5.0 - 10.0 years
15 - 25 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled Quality Engineer (Data) to ensure the reliability, accuracy, and performance of data pipelines and AI/ML models within our SmartFM platform. This role is critical to delivering trusted data and actionable insights that drive smart building optimization and operational efficiency.

Key Responsibilities:
- Design and implement robust QA strategies for data pipelines, ML models, and agentic workflows.
- Test and validate data ingestion and streaming systems (e.g., StreamSets, Kafka) for accuracy, completeness, and resilience.
- Ensure data integrity and schema validation within MongoDB and other data stores.
- Collaborate with data engineers to proactively identify and resolve data quality issues.
- Partner with data scientists to validate ML/DL/LLM model performance, fairness, and robustness.
- Automate testing processes using frameworks such as Pytest, Great Expectations, and Deepchecks (a Pytest sketch follows below).
- Monitor production pipelines for anomalies, data drift, and model degradation.
- Participate in code reviews and QA audits, and maintain comprehensive documentation of test plans and results.
- Continuously evaluate and improve QA processes based on industry best practices and emerging trends.

Required Technical Skills:
- 5-10 years of QA experience with a focus on data validation and ML model testing.
- Strong command of SQL for complex data queries and integrity checks.
- Practical experience with StreamSets, Kafka, and MongoDB.
- Proficiency in Python scripting for automation and testing.
- Familiarity with ML testing metrics, model validation techniques, and bias detection.
- Exposure to cloud platforms such as Azure, AWS, or GCP.
- Working knowledge of QA tools like Pytest, Great Expectations, and Deepchecks.
- Understanding of Node.js and React-based applications is an added advantage.

Additional Qualifications:
- Excellent communication, documentation, and cross-functional collaboration skills.
- Strong analytical mindset and high attention to detail.
- Ability to work with cross-disciplinary teams including Engineering, Data Science, and Product.
- Passion for continuous learning and adoption of new QA tools and methodologies.
- Domain knowledge in facility management, IoT, or building automation systems is a strong plus.
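The Pytest sketch referenced above: data quality assertions expressed as ordinary tests over a pandas frame. The fixture uses inline sample data and hypothetical sensor columns; a real suite would load a sample of actual pipeline output instead.

```python
import pandas as pd
import pytest

@pytest.fixture
def readings() -> pd.DataFrame:
    # Inline sample; a real suite would read from the pipeline output.
    return pd.DataFrame({
        "sensor_id": ["a1", "a2", "a3"],
        "temperature_c": [21.4, 22.1, 19.8],
    })

def test_no_missing_sensor_ids(readings):
    assert readings["sensor_id"].notna().all()

def test_sensor_ids_unique(readings):
    assert not readings["sensor_id"].duplicated().any()

def test_temperature_within_physical_range(readings):
    # Hypothetical plausible band for an indoor building sensor.
    assert readings["temperature_c"].between(-40, 85).all()
```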
Posted 1 week ago