1.0 - 4.0 years
2 - 6 Lacs
Mumbai, Pune, Chennai
Work from Office
A Graph Data Engineer is required for a complex Supply Chain project.

Key required skills:
- Graph data modelling: experience with graph data models (LPG, RDF) and a graph query language (Cypher), plus exposure to various graph data modelling techniques
- Experience with Neo4j Aura and optimizing complex queries
- Experience with GCP stack components such as BigQuery, GCS, and Dataproc
- Experience in PySpark and SparkSQL is desirable
- Experience exposing graph data to visualisation tools such as NeoDash, Tableau, and Power BI

The expertise you have:
- Bachelor's or Master's degree in a technology-related field (e.g. Engineering, Computer Science)
- Demonstrable experience implementing data solutions in the graph database space
- Hands-on experience with graph databases (Neo4j preferred, or any other)
- Experience tuning graph databases
- Understanding of graph data model paradigms (LPG, RDF) and a graph query language; hands-on experience with Cypher is required (an illustrative sketch follows this posting)
- Solid understanding of graph data modelling, graph schema development, and graph data design
- Relational database experience; hands-on SQL experience is required

Desirable (optional) skills:
- Data ingestion technologies (ETL/ELT), messaging/streaming technologies (GCP Data Fusion, Kinesis/Kafka), API and in-memory technologies
- Understanding of developing highly scalable distributed systems using open-source technologies
- Experience with supply chain data is desirable but not essential

Location: Pune, Mumbai, Chennai, Bangalore, Hyderabad
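For context on the Cypher skills listed above, here is a minimal sketch using the official neo4j Python driver against a Neo4j Aura instance. The URI, credentials, and the Supplier/Part schema are illustrative assumptions, not details of the project described.

```python
# Minimal sketch: querying a supply-chain graph in Neo4j Aura with Cypher.
# Connection details and the Supplier/Part schema are hypothetical.
from neo4j import GraphDatabase

uri = "neo4j+s://<your-aura-instance>.databases.neo4j.io"  # placeholder
driver = GraphDatabase.driver(uri, auth=("neo4j", "<password>"))

# Find suppliers within two hops of a part, a typical LPG traversal.
query = """
MATCH (s:Supplier)-[:SUPPLIES*1..2]->(p:Part {sku: $sku})
RETURN s.name AS supplier, p.sku AS part
LIMIT 25
"""

with driver.session() as session:
    for record in session.run(query, sku="ABC-123"):
        print(record["supplier"], record["part"])

driver.close()
```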
Posted 1 week ago
1.0 - 3.0 years
3 - 6 Lacs
Chennai
Work from Office
Skill set required: GCP, data modelling (OLTP, OLAP), indexing, DBSchema, CloudSQL, BigQuery.

Data Modeller:
- Hands-on data modelling for OLTP and OLAP systems
- In-depth knowledge of conceptual, logical, and physical data modelling
- Strong understanding of indexing, partitioning, and data sharding, with practical experience of having applied them (a BigQuery sketch follows this posting)
- Strong understanding of the variables impacting database performance for near-real-time reporting and application interaction
- Working experience with at least one data modelling tool, preferably DBSchema
- Functional knowledge of the mutual fund industry is a plus
- Good understanding of GCP databases such as AlloyDB, CloudSQL, and BigQuery
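To make the partitioning point concrete, here is a small sketch creating a date-partitioned, clustered BigQuery table via the google-cloud-bigquery client. The dataset, table, and columns are hypothetical; note that BigQuery relies on partitioning and clustering for pruning rather than traditional indexes.

```python
# Sketch: a date-partitioned, clustered table in BigQuery.
# Dataset, table, and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()  # uses application default credentials

ddl = """
CREATE TABLE IF NOT EXISTS demo_ds.transactions (
  txn_id STRING,
  fund_code STRING,
  amount NUMERIC,
  txn_date DATE
)
PARTITION BY txn_date
CLUSTER BY fund_code
"""

client.query(ddl).result()  # blocks until the DDL job finishes
```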
Posted 1 week ago
6.0 - 11.0 years
13 - 17 Lacs
Bengaluru
Work from Office
6+ years of industry work experience. Experience extracting data from a variety of sources, and a desire to expand those skills. Experience with the Google Looker tool and with BigQuery and GCP technologies. Strong SQL and Spark knowledge. Excellent data analysis skills: must be comfortable querying and analyzing large amounts of data on Hadoop HDFS using Hive and Spark. Knowledge of financial accounting is a bonus. Able to work independently with cross-functional teams and drive towards resolution. Experience with object-oriented programming using Python and its design patterns. Experience handling Unix systems for optimal usage to host enterprise web applications. GCP certifications preferred. A payments industry background is good to have; a candidate who has been part of a Google Cloud migration is an ideal fit.

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise: 3-5 years of experience. Intuitive individual with an ability to manage change and proven time management. Proven interpersonal skills while contributing to team effort by accomplishing related results as needed. Up-to-date technical knowledge maintained by attending educational workshops and reviewing publications.

Preferred technical and professional experience: 6+ years of industry work experience. Experience extracting data from a variety of sources, and a desire to expand those skills. Experience with the Google Looker tool.
Posted 1 week ago
6.0 - 11.0 years
13 - 17 Lacs
Gurugram
Work from Office
6+ years of industry work experience. Experience extracting data from a variety of sources, and a desire to expand those skills. Experience with the Google Looker tool and with BigQuery and GCP technologies. Strong SQL and Spark knowledge. Excellent data analysis skills: must be comfortable querying and analyzing large amounts of data on Hadoop HDFS using Hive and Spark. Knowledge of financial accounting is a bonus. Able to work independently with cross-functional teams and drive towards resolution. Experience with object-oriented programming using Python and its design patterns. Experience handling Unix systems for optimal usage to host enterprise web applications. GCP certifications preferred. A payments industry background is good to have; a candidate who has been part of a Google Cloud migration is an ideal fit.

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise: 3-5 years of experience. Intuitive individual with an ability to manage change and proven time management. Proven interpersonal skills while contributing to team effort by accomplishing related results as needed. Up-to-date technical knowledge maintained by attending educational workshops and reviewing publications.

Preferred technical and professional experience: 6+ years of industry work experience. Experience extracting data from a variety of sources, and a desire to expand those skills. Experience with the Google Looker tool.
Posted 1 week ago
3.0 - 6.0 years
10 - 14 Lacs
Bengaluru
Work from Office
As an entry-level Application Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best-practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems.

In your role, you may be responsible for:
- Working across the entire system architecture to design, develop, and support high-quality, scalable products and interfaces for our clients
- Collaborating with cross-functional teams to understand requirements and define technical specifications for generative AI projects
- Employing IBM's Design Thinking to create products that provide a great user experience along with high performance, security, quality, and stability
- Working with a variety of databases (SQL, Postgres, DB2, MongoDB), operating systems (Linux, Windows, iOS, Android), and modern UI frameworks (Backbone.js, AngularJS, React, Ember.js, Bootstrap, and jQuery)
- Creating everything from mockups and UI components to algorithms and data structures as you deliver a viable product

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- SQL authoring, query, and cost optimisation, primarily on BigQuery
- Python as an object-oriented scripting language
- Data pipeline, data streaming, and workflow management tools: Dataflow, Pub/Sub, Hadoop, Spark Streaming
- Version control system: Git; knowledge of Infrastructure as Code (Terraform) preferred
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions

Preferred technical and professional experience:
- Experience building and optimising data pipelines, architectures, and data sets
- Building processes supporting data transformation, data structures, metadata, dependency, and workload management
- Working knowledge of message queuing, stream processing, and highly scalable data stores
- Experience supporting and working with cross-functional teams in a dynamic environment

We are looking for a candidate with experience in a Data Engineer role who is also familiar with Google Cloud Platform.
Posted 1 week ago
2.0 - 6.0 years
7 - 11 Lacs
Bengaluru
Work from Office
As a Software Developer you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys.

Your primary responsibilities include:
- Comprehensive feature development and issue resolution: working on end-to-end feature development and solving challenges faced in the implementation
- Stakeholder collaboration and issue resolution: collaborating with key stakeholders, internal and external, to understand problems and issues with the product and features, and resolving them as per defined SLAs
- Continuous learning and technology integration: being eager to learn new technologies and applying them in feature development

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- End-to-end functional knowledge of the data pipeline/transformation implementations the candidate has delivered; should understand the purpose/KPIs the data transformation served
- Expert in SQL: able to do data analysis and investigation using SQL queries
- Implementation knowledge of advanced SQL functions such as regular expressions, aggregation, pivoting, ranking, and deduplication (a sketch follows this posting)
- BigQuery and BigQuery transformation (using stored procedures)
- Data modelling concepts: star and snowflake schemas, fact and dimension tables, joins, cardinality, etc.

Preferred technical and professional experience:
- Stay updated with the latest trends and advancements in cloud technologies, frameworks, and tools
- Conduct code reviews and provide constructive feedback to maintain code quality and ensure adherence to best practices
- Troubleshoot and debug issues, and deploy applications to the cloud platform
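As a rough illustration of the advanced SQL functions this posting names (regular expressions, aggregation, ranking, deduplication), here is a hypothetical BigQuery query run from Python. The table and columns are invented for the example.

```python
# Sketch of the "advanced SQL" patterns listed above, run on BigQuery:
# regular expressions, ranking for deduplication, then aggregation.
# Table and column names are hypothetical.
from google.cloud import bigquery

sql = r"""
WITH ranked AS (
  SELECT
    customer_id,
    email,
    order_total,
    ROW_NUMBER() OVER (
      PARTITION BY customer_id ORDER BY updated_at DESC
    ) AS rn                                        -- ranking
  FROM demo_ds.orders
  WHERE REGEXP_CONTAINS(email, r'@example\.com$')  -- regular expression
)
SELECT
  customer_id,
  ANY_VALUE(email) AS email,
  SUM(order_total) AS lifetime_total               -- aggregation
FROM ranked
WHERE rn = 1                                       -- deduplication
GROUP BY customer_id
"""

for row in bigquery.Client().query(sql).result():
    print(dict(row))
```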
Posted 1 week ago
1.0 - 4.0 years
7 - 11 Lacs
Mumbai, Pune, Bengaluru
Work from Office
Location: Bangalore, Mumbai, Pune, Chennai, Hyderabad, Kolkata, Delhi

Technical skills:
- Data strategist with expertise in product growth, Conversion Rate Optimization (experiments), and personalisation, using digital analytics tools such as Adobe Analytics, Google Analytics, and others (quantitative and qualitative)
- Proven success in acquisition and retention marketing through strategic and tactical implementation of Conversion Rate Optimization (A/B testing) and product optimization (landing page optimization, product-market fit) driven by data insights
- Personalisation: customer segmentation, targeted messaging, dynamic content, recommendations, behavioral targeting, and AI-powered personalization
- Skilled in leveraging advanced analytics tools for actionable data extraction from extensive datasets, such as SQL, BigQuery, Excel, Python for data analysis, Power BI, and Data Studio
- Proficient in implementing digital analytics measurement across diverse domains using tools such as Adobe Launch, Google Tag Manager, and Tealium

Soft skills:
- Experience in client-facing projects and stakeholder management; excellent communication skills
- Collaborative team player, aligning product vision with business objectives cross-functionally
- Avid learner, staying current with industry trends and emerging technologies
- Committed to delivering measurable results through creative thinking, exceeding performance metrics, and fostering growth
Posted 1 week ago
5.0 - 9.0 years
15 - 20 Lacs
Hyderabad
Hybrid
About us: Our global community of colleagues brings a diverse range of experiences and perspectives to our work. You'll find us working from a corporate office or plugging in from a home desk, listening to our customers and collaborating on solutions. Our products and solutions are vital to businesses of every size, scope, and industry. And at the heart of our work you'll find our core values: to be data-inspired, relentlessly curious, and inherently generous. Our values are the constant touchstone of our community; they guide our behavior and anchor our decisions.

Designation: Software Engineer II
Location: Hyderabad

Key responsibilities:
- Design, build, and deploy new data pipelines within our big data ecosystems using StreamSets, Talend, Informatica BDM, etc.; document new and existing pipelines and datasets
- Design ETL/ELT data pipelines using StreamSets, Informatica, or any other ETL processing engine
- Familiarity with data pipelines, data lakes, and modern data warehousing practices (virtual data warehouses, push-down analytics, etc.)
- Expert-level programming skills in Python and Spark
- Cloud-based infrastructure: GCP
- Experience with one of the ETL tools (Informatica, StreamSets) in the creation of complex parallel loads, cluster batch execution, and dependency creation using jobs/topologies/workflows, etc.
- Experience in SQL and conversion of SQL stored procedures into Informatica/StreamSets
- Strong exposure working with web service origins/targets/processors/executors, XML/JSON sources, and RESTful APIs
- Strong exposure working with relational databases (DB2, Oracle, SQL Server), including complex SQL constructs and DDL generation
- Exposure to Apache Airflow for scheduling jobs
- Strong knowledge of big data architecture (HDFS), cluster installation, configuration, monitoring, cluster security, cluster resource management, maintenance, and performance tuning
- Create POCs to enable new workloads and technical capabilities on the platform; work with the platform and infrastructure engineers to implement these capabilities in production
- Manage workloads and enable workload optimization, including managing resource allocation and scheduling across multiple tenants to fulfill SLAs
- Participate in planning activities and data science, and perform activities to increase platform skills

Key requirements:
- Minimum 6 years of experience in ETL/ELT technologies, preferably StreamSets, Informatica, Talend, etc.
- Minimum 6 years of hands-on experience with big data technologies, e.g. Hadoop, Spark, Hive
- Minimum 3 years of experience with Spark
- Minimum 3 years of experience in cloud environments, preferably GCP
- Minimum 2 years in big data service delivery (or equivalent) roles covering: NoSQL and graph databases; Informatica or StreamSets data integration (ETL/ELT); exposure to role- and attribute-based access controls
- Hands-on experience managing solutions deployed in the cloud, preferably on GCP
- Experience working in a global company; working in a DevOps model is a plus

Dun & Bradstreet is an Equal Opportunity Employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, creed, sex, age, national origin, citizenship status, disability status, sexual orientation, gender identity or expression, pregnancy, genetic information, protected military and veteran status, ancestry, marital status, medical condition (cancer and genetic characteristics) or any other characteristic protected by law.
We are committed to Equal Employment Opportunity and providing reasonable accommodations to qualified candidates and employees. If you are interested in applying for employment with Dun & Bradstreet and need special assistance or an accommodation to use our website or to apply for a position, please send an e-mail with your request to acquisitiont@dnb.com. Determinations on requests for reasonable accommodation are made on a case-by-case basis.
Posted 1 week ago
6.0 - 9.0 years
7 - 14 Lacs
Hyderabad
Work from Office
Role overview: We are seeking a talented and forward-thinking Data Engineer for one of the large financial services GCCs based in Hyderabad. Responsibilities include designing and constructing data pipelines, integrating data from multiple sources, developing scalable data solutions, optimizing data workflows, collaborating with cross-functional teams, implementing data governance practices, and ensuring data security and compliance.

Technical requirements (an illustrative sketch follows this posting):
1. Proficiency in ETL, batch, and streaming processes
2. Experience with BigQuery, Cloud Storage, and CloudSQL
3. Strong programming skills in Python, SQL, and Apache Beam for data processing
4. Understanding of data modeling and schema design for analytics
5. Knowledge of data governance, security, and compliance in GCP
6. Familiarity with machine learning workflows and integration with GCP ML tools
7. Ability to optimize performance within data pipelines

Functional requirements:
1. Ability to collaborate with data operations, software engineers, data scientists, and business SMEs to develop data product features
2. Experience in leading and mentoring peers within an existing development team
3. Strong communication skills to craft and communicate robust solutions
4. Proficiency in working with engineering leads, enterprise and data architects, and business architects to build appropriate data foundations
5. Willingness to work on contemporary data architecture in public and private cloud environments

This role offers a compelling opportunity for a seasoned Data Engineer to drive transformative cloud initiatives within the financial sector, leveraging deep experience and expertise to deliver innovative cloud solutions that align with business imperatives and regulatory requirements.

Qualification: Engineering graduate/postgraduate

Criteria:
1. Proficient in ETL, Python, and Apache Beam for data processing efficiency
2. Demonstrated expertise in BigQuery, Cloud Storage, and CloudSQL utilization
3. Strong collaboration skills with cross-functional teams for data product development
4. Comprehensive knowledge of data governance, security, and compliance in GCP
5. Experienced in optimizing performance within data pipelines for efficiency
6. Relevant experience: 6-9 years

Connect at 9993809253
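Here is a minimal Apache Beam sketch of the style of processing named in the technical requirements. The in-memory data and transforms are illustrative only; a production pipeline would typically run on Dataflow with real sources and sinks.

```python
# Minimal Apache Beam sketch: sum amounts per account key.
# Data and transforms are illustrative; production would use Dataflow
# with real sources (e.g., Pub/Sub, GCS) and sinks (e.g., BigQuery).
import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create" >> beam.Create([("acct-1", 120.0), ("acct-1", 30.5), ("acct-2", 99.9)])
        | "SumPerKey" >> beam.CombinePerKey(sum)
        | "Format" >> beam.Map(lambda kv: f"{kv[0]},{kv[1]:.2f}")
        | "Print" >> beam.Map(print)
    )
```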
Posted 1 week ago
5.0 - 8.0 years
14 - 22 Lacs
Navi Mumbai, Gurugram, Mumbai (All Areas)
Work from Office
Job description (GCP Data Engineer): GCP Developer, 5 to 9 years of experience. Location: Mumbai and Gurugram.

- Designing, building, and deploying cloud solutions for enterprise applications, with expertise in cloud platform engineering
- Expertise in application migration projects, including optimizing technical reliability and improving application performance
- Good understanding of cloud security frameworks and cloud security standards
- Solid knowledge and extensive experience of GCP and its cloud services
- Experience with GCP services such as Compute Engine, Dataproc, Dataflow, BigQuery, Secret Manager, Kubernetes Engine, and Cloud Functions
- Experience with Google storage products such as Cloud Storage, Persistent Disk, Nearline, Coldline, and Cloud Filestore
- Experience with database products such as Datastore, Cloud SQL, Cloud Spanner, and Cloud Bigtable
- Experience implementing containers using cloud-native container orchestrators in GCP
- Strong cloud programming skills, with experience in API and Cloud Functions development using Python
- Hands-on experience with enterprise config and DevOps tools, including Ansible, Bitbucket, Git, Jira, and Confluence
- Strong knowledge of cloud security practices and Cloud IAM policy preparation for GCP
- Good hands-on experience in PySpark, preferably more than 3 years, with good knowledge of Python and Spark concepts (a sketch follows this posting)
- Develop and maintain data pipelines and ETL processes using Python and PySpark
- Design, implement, and optimize Spark jobs for performance and scalability
- Perform data analysis and troubleshooting to ensure data quality and reliability
- Ability to participate in fast-paced DevOps and system engineering teams within Scrum agile processes
- Understanding of data modeling and data warehousing concepts
- Understand the current application infrastructure and suggest changes to it

Interested candidates, please share your CVs at mudesh.kumar.tpr@pwc.com
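As an illustration of the PySpark work described, here is a minimal sketch reading CSVs from Cloud Storage, cleaning them, and writing to BigQuery. Bucket, table, and column names are placeholders, and it assumes the spark-bigquery connector is available on the cluster (as it is on Dataproc).

```python
# Sketch: GCS -> transform -> BigQuery with PySpark.
# Paths, table names, and columns are hypothetical; assumes the
# spark-bigquery connector is on the classpath (bundled with Dataproc).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("gcs-to-bq-demo").getOrCreate()

raw = spark.read.option("header", True).csv("gs://demo-bucket/landing/orders/*.csv")

cleaned = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("amount").cast("double") > 0)   # drop non-positive amounts
       .dropDuplicates(["order_id"])                 # one row per order
)

(cleaned.write.format("bigquery")
    .option("table", "demo_ds.orders_clean")
    .option("temporaryGcsBucket", "demo-bucket-tmp")  # staging for indirect write
    .mode("append")
    .save())
```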
Posted 1 week ago
5.0 - 10.0 years
11 - 21 Lacs
Pune
Work from Office
Role: Data Engineer, Media Mix Modeling (MMM)
Experience: 4+ years
Location: Pune/Remote

Job summary: The ideal candidate will have strong experience building scalable ETL pipelines and working with both online and offline marketing data to support MMM, attribution, and ROI analysis. The role requires close collaboration with data scientists and marketing teams to deliver clean, structured datasets for modeling. An illustrative pipeline sketch follows the skills lists below.

Mandatory skills:
- Strong proficiency in SQL and Python or Scala
- Hands-on experience with cloud platforms (preferably GCP/BigQuery)
- Proven experience with ETL tools such as Apache Airflow or DBT
- Experience integrating data from multiple sources: digital platforms (Google Ads, Meta), CRM, POS, TV, radio, etc.
- Understanding of Media Mix Modeling (MMM) and attribution methodologies

Good-to-have skills:
- Experience with data visualization tools (Tableau, Looker, Power BI)
- Exposure to statistical modeling techniques

Please share your resume at Neesha1@damcogroup.com
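A minimal sketch of the kind of Airflow pipeline this role implies: a daily DAG that extracts ad-platform spend and loads it for MMM analysis. The task bodies, connections, and table names are placeholders, not a real integration.

```python
# Sketch: a daily Airflow DAG for MMM-style media-spend ingestion.
# The extract/load bodies and any table names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_spend(**context):
    # Placeholder: call Google Ads / Meta APIs here and stage files to GCS.
    print("extracting spend for", context["ds"])

def load_to_bigquery(**context):
    # Placeholder: load the staged files into a BigQuery spend fact table.
    print("loading spend for", context["ds"])

with DAG(
    dag_id="mmm_media_spend",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_spend", python_callable=extract_spend)
    load = PythonOperator(task_id="load_to_bigquery", python_callable=load_to_bigquery)
    extract >> load  # load runs only after extraction succeeds
```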
Posted 1 week ago
4.0 - 6.0 years
5 - 10 Lacs
Bengaluru
Work from Office
4+ years of testing experience, with at least 2 years in ETL testing and automation. Experience automating ETL flows and developing automation frameworks for ETL. Good coding skills in Python and Pytest. Expert at test data analysis and test design. Good at database analytics (ETL or BigQuery); Snowflake knowledge is a plus. Good communication skills with customers and other stakeholders. Capable of working independently or with little supervision.
Posted 1 week ago
4.0 - 9.0 years
10 - 14 Lacs
Bengaluru
Work from Office
4+ years of experience in software development using Java/J2EE technologies. Exposure to microservices and RESTful API development with Java and the Spring Framework. 4+ years of experience in database technologies, with exposure to NoSQL technologies. 4 years of experience working on projects involving the implementation of solutions applying development life cycles (SDLC). Working experience with frontend technologies like ReactJS or other JavaScript frameworks.
Posted 1 week ago
3.0 - 7.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Skills required: big data workflows (ETL/ELT), hands-on Python, hands-on SQL, any cloud (GCP BigQuery preferred), and Airflow (good knowledge of Airflow features, operators, scheduling, etc.).

Note: candidates will take a coding test (Python and SQL) during the interview process. It will be conducted through CodersPad, and the panel will set it at run-time.
Posted 1 week ago
10.0 - 14.0 years
14 - 19 Lacs
Bengaluru
Work from Office
Program Manager for managing accounts. Will own account delivery and the customer relationship. KRAs include program management, relationship management, account P&L, understanding of technologies (MS, Java, Data & AI, Gen AI, testing - automation and manual), and people management.
Posted 1 week ago
3.0 - 8.0 years
12 - 16 Lacs
Mangaluru, Hyderabad, Bengaluru
Work from Office
We're looking for a Senior Backend Developer who thrives at the intersection of software engineering and data engineering. This role involves architecting and optimizing complex, high-throughput backend systems that power data-driven products at scale. If you have deep backend chops, strong database expertise across RDBMS platforms, and hands-on experience with large-scale data workflows, we'd love to hear from you.

Key responsibilities:
1. Leadership & project delivery: Lead backend development teams, ensuring adherence to Agile practices and development best practices. Collaborate across product, frontend, DevOps, and data teams to design, build, and deploy robust features and services. Drive code quality through reviews, mentoring, and enforcing design principles.
2. Research & innovation: Conduct feasibility studies on emerging technologies, frameworks, and methodologies. Design and propose innovative solutions for complex technical challenges using data-centric approaches. Continuously improve system design with a forward-thinking mindset.
3. System architecture & optimization: Design scalable, distributed, and secure system architectures. Optimize and refactor legacy systems to improve performance, maintainability, and scalability. Define best practices around observability, logging, and resiliency.
4. Database & data engineering: Design, implement, and optimize relational databases (PostgreSQL, MySQL, SQL Server, etc.). Develop efficient SQL queries, stored procedures, indexes, and schema migrations. Collaborate with data engineering teams on ETL/ELT pipelines, data ingestion, transformation, and warehousing. Work with large datasets, batch processing, and streaming data (e.g., Kafka, Spark, Airflow). Ensure data integrity, consistency, and security across backend and analytics pipelines.

Must-have skills:
- Backend development: TypeScript, Node.js (or an equivalent backend framework), REST/GraphQL API design
- Databases & storage: strong proficiency in PostgreSQL, plus experience with other RDBMS like MySQL, SQL Server, or Oracle; familiarity with NoSQL (e.g., Redis, MongoDB) and columnar/OLAP stores (e.g., ClickHouse, Redshift)
- Awareness of data engineering: hands-on work with data ingestion, transformation, pipelines, and data orchestration tools; exposure to tools like Apache Airflow, Kafka, Spark, or dbt
- Cloud infrastructure: proficiency with AWS (Lambda, EC2, RDS, S3, IAM, CloudWatch)
- DevOps & CI/CD: experience with Docker, Kubernetes, GitHub Actions, or similar CI/CD pipelines
- Architecture: experience designing secure, scalable, and fault-tolerant backend systems
- Agile & SDLC: strong understanding of Agile workflows, SDLC best practices, and version control (Git)

Nice-to-have skills:
- Experience with event-driven architectures or microservices
- Exposure to data warehouse environments (e.g., Snowflake, BigQuery)
- Knowledge of backend-for-frontend collaboration (especially with React.js)
- Familiarity with data cataloging, data governance, and lineage tools

Preferred qualifications:
- Bachelor's or Master's in Computer Science, Software Engineering, or a related technical field
- Proven experience leading backend/data projects in enterprise or startup environments
- Strong system design, analytical, and problem-solving skills
- Awareness of cybersecurity best practices in cloud and backend development
Posted 1 week ago
4.0 - 8.0 years
9 - 12 Lacs
Chennai
Work from Office
Job title: Data Engineer
Location: Chennai (Hybrid)

Summary: Design, develop, and maintain scalable data pipelines and systems to support the collection, integration, and analysis of healthcare and enterprise data. The primary responsibilities of this role include designing and implementing efficient data pipelines, architecting robust data models, and adhering to data management best practices. In this position, you will play a crucial part in transforming raw data into meaningful insights through the development of semantic data layers, enabling data-driven decision-making across the organization. The ideal candidate will possess strong technical skills, a keen understanding of data architecture, and a passion for optimizing data processes.

Accountabilities:
- Design and implement scalable and efficient data pipelines to acquire, transform, and integrate data from various sources, such as electronic health records (EHR), medical devices, claims data, and back-office enterprise data
- Develop data ingestion processes, including data extraction, cleansing, and validation, ensuring data quality and integrity throughout the pipeline
- Collaborate with cross-functional teams, including subject matter experts, analysts, and engineers, to define data requirements and ensure data pipelines meet the needs of data-driven initiatives
- Design and implement data integration strategies to merge disparate datasets, enabling comprehensive and holistic analysis
- Implement data governance practices and ensure compliance with healthcare data standards, regulations (e.g., HIPAA), and security protocols
- Monitor and troubleshoot pipeline and data model performance, identifying and addressing bottlenecks, and ensuring optimal system performance and data availability
- Design and implement data models that align with domain requirements, ensuring efficient data storage, retrieval, and delivery
- Apply data modeling best practices and standards to ensure consistency, scalability, and reusability of data models
- Implement data quality checks and validation processes to ensure the accuracy, completeness, and consistency of healthcare data
- Develop and enforce data governance policies and procedures, including data lineage, architecture, and metadata management
- Collaborate with stakeholders to define data quality metrics and establish data quality improvement initiatives
- Document data engineering processes, methodologies, and data flows for knowledge sharing and future reference
- Stay up to date with emerging technologies, industry trends, and healthcare data standards to drive innovation and ensure compliance

Skills:
- 4+ years of strong programming skills in object-oriented languages such as Python
- Proficiency in SQL
- Hands-on experience with data integration tools, ETL/ELT frameworks, and data warehousing concepts
- Hands-on experience with data modeling and schema design, including concepts such as star schema, snowflake schema, and data normalization
- Familiarity with healthcare data standards (e.g., HL7, FHIR), electronic health records (EHR), medical coding systems (e.g., ICD-10, SNOMED CT), and relevant healthcare regulations (e.g., HIPAA)
- Hands-on experience with big data processing frameworks such as Apache Hadoop, Apache Spark, etc.
- Working knowledge of cloud computing platforms (e.g., AWS, Azure, GCP) and related services (e.g., DMS, S3, Redshift, BigQuery)
- Experience integrating heterogeneous data sources, aligning data models, and mapping between different data schemas
- Understanding of metadata management principles and tools for capturing, storing, and managing metadata associated with data models and semantic data layers
- Ability to track the flow of data and its transformations across data models, ensuring transparency and traceability
- Understanding of data governance principles, data quality management, and data security best practices
- Strong problem-solving and analytical skills with the ability to work with complex datasets and data integration challenges
- Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams

Education: Bachelor's or Master's degree in computer science, information systems, or a related field. Proven experience as a Data Engineer or similar role with a focus on healthcare data.
Posted 1 week ago
5.0 - 7.0 years
5 - 9 Lacs
Chennai
Work from Office
Design, develop, and maintain scalable data pipelines and systems to support the collection, integration, and analysis of healthcare and enterprise data. The primary responsibilities of this role include designing and implementing efficient data pipelines, architecting robust data models, and adhering to data management best practices. In this position, you will play a crucial part in transforming raw data into meaningful insights through the development of semantic data layers, enabling data-driven decision-making across the organization. The ideal candidate will possess strong technical skills, a keen understanding of data architecture, and a passion for optimizing data processes.

What you will do:
- Design and implement scalable and efficient data pipelines to acquire, transform, and integrate data from various sources, such as electronic health records (EHR), medical devices, claims data, and back-office enterprise data
- Develop data ingestion processes, including data extraction, cleansing, and validation, ensuring data quality and integrity throughout the pipeline
- Collaborate with cross-functional teams, including subject matter experts, analysts, and engineers, to define data requirements and ensure data pipelines meet the needs of data-driven initiatives
- Design and implement data integration strategies to merge disparate datasets, enabling comprehensive and holistic analysis
- Implement data governance practices and ensure compliance with healthcare data standards, regulations (e.g., HIPAA), and security protocols
- Monitor and troubleshoot pipeline and data model performance, identifying and addressing bottlenecks, and ensuring optimal system performance and data availability
- Design and implement data models that align with domain requirements, ensuring efficient data storage, retrieval, and delivery
- Apply data modeling best practices and standards to ensure consistency, scalability, and reusability of data models
- Implement data quality checks and validation processes to ensure the accuracy, completeness, and consistency of healthcare data
- Develop and enforce data governance policies and procedures, including data lineage, architecture, and metadata management
- Collaborate with stakeholders to define data quality metrics and establish data quality improvement initiatives
- Document data engineering processes, methodologies, and data flows for knowledge sharing and future reference
- Stay up to date with emerging technologies, industry trends, and healthcare data standards to drive innovation and ensure compliance

Who you are:
- 4+ years of strong programming skills in object-oriented languages such as Python
- Proficiency in SQL
- Hands-on experience with data integration tools, ETL/ELT frameworks, and data warehousing concepts
- Hands-on experience with data modeling and schema design, including concepts such as star schema, snowflake schema, and data normalization
- Familiarity with healthcare data standards (e.g., HL7, FHIR), electronic health records (EHR), medical coding systems (e.g., ICD-10, SNOMED CT), and relevant healthcare regulations (e.g., HIPAA)
- Hands-on experience with big data processing frameworks such as Apache Hadoop, Apache Spark, etc.
- Working knowledge of cloud computing platforms (e.g., AWS, Azure, GCP) and related services (e.g., DMS, S3, Redshift, BigQuery)
- Experience integrating heterogeneous data sources, aligning data models, and mapping between different data schemas
- Understanding of metadata management principles and tools for capturing, storing, and managing metadata associated with data models and semantic data layers
- Ability to track the flow of data and its transformations across data models, ensuring transparency and traceability
- Understanding of data governance principles, data quality management, and data security best practices
- Strong problem-solving and analytical skills with the ability to work with complex datasets and data integration challenges
- Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams

Education: Bachelor's or Master's degree in computer science, information systems, or a related field. Proven experience as a Data Engineer or similar role with a focus on healthcare data.

Soft skills:
- Attention to detail
- Proficient in English communication, both written and verbal
- Dedicated self-starter with excellent people skills
- Quick learner and a go-getter
- Effective time and project management
- Analytical thinker and a great team player
- Strong leadership, interpersonal, and problem-solving skills
Posted 1 week ago
1.0 - 5.0 years
9 - 13 Lacs
Mumbai, Gurugram, Bengaluru
Work from Office
We are seeking a talented and motivated Data Scientist with 1-3 years of experience to join our Data Science team. If you have a strong passion for data science, expertise in machine learning, and experience working with large-scale datasets, we want to hear from you. As a Data Scientist at RevX, you will play a crucial role in developing and implementing machine learning models to drive business impact. You will work closely with teams across data science, engineering, product, and campaign management to build predictive models, optimize algorithms, and deliver actionable insights. Your work will directly influence business strategy, product development, and campaign optimization. A toy modelling sketch follows this posting.

Major responsibilities:
- Develop and implement machine learning models, particularly neural networks, decision trees, random forests, and XGBoost, to solve complex business problems
- Work on deep learning models and other advanced techniques to enhance predictive accuracy and model performance
- Analyze and interpret large, complex datasets using Python, SQL, and big data technologies to derive meaningful insights
- Collaborate with cross-functional teams to design, build, and deploy end-to-end data science solutions, including data pipelines and model deployment frameworks
- Utilize advanced statistical techniques and machine learning methodologies to optimize business strategies and outcomes
- Evaluate and improve model performance, calibration, and deployment strategies for real-time applications
- Perform clustering, segmentation, and other unsupervised learning techniques to discover patterns in large datasets
- Conduct A/B testing and other experimental designs to validate model performance and business strategies
- Create and maintain data visualizations and dashboards using tools such as matplotlib, seaborn, Grafana, and Looker to communicate findings
- Provide technical expertise in handling big data, data warehousing, and cloud-based platforms like Google Cloud Platform (GCP)

Required experience/skills:
- Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Mathematics, or a related field
- 1-3 years of experience in data science or machine learning roles
- Strong proficiency in Python for machine learning, data analysis, and deep learning applications
- Experience in developing, deploying, and monitoring machine learning models, particularly neural networks and other advanced algorithms
- Expertise in handling big data technologies, with experience in tools such as BigQuery and cloud platforms (GCP preferred)
- Advanced SQL skills for data querying and manipulation from large datasets
- Experience with data visualization tools like matplotlib, seaborn, Grafana, and Looker
- Strong understanding of A/B testing, statistical tests, experimental design, and methodologies
- Experience in clustering, segmentation, and other unsupervised learning techniques
- Strong problem-solving skills and the ability to work with complex datasets and machine learning pipelines
- Excellent communication skills, with the ability to explain complex technical concepts to non-technical stakeholders

Preferred skills:
- Experience with deep learning frameworks such as TensorFlow or PyTorch
- Familiarity with data warehousing concepts and big data tools
- Knowledge of MLOps practices, including model deployment, monitoring, and management
- Experience with business intelligence tools and creating data-driven dashboards
- Understanding of reinforcement learning, natural language processing (NLP), or other advanced AI techniques
Education: Bachelor of Engineering or similar degree from any reputed University.
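To ground the modelling skills above, here is a toy XGBoost workflow on synthetic data: train a classifier against a validation set and report AUC. The data is generated, the hyperparameters are illustrative only, and the constructor-level eval_metric assumes xgboost 1.6 or newer.

```python
# Toy sketch of the XGBoost workflow the posting describes.
# Synthetic data; illustrative hyperparameters; xgboost >= 1.6 assumed.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
import xgboost as xgb

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = xgb.XGBClassifier(
    n_estimators=200,
    max_depth=5,
    learning_rate=0.1,
    eval_metric="auc",  # tracked on the eval_set during training
)
model.fit(X_train, y_train, eval_set=[(X_val, y_val)], verbose=False)

print("validation AUC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
```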
Posted 1 week ago
8.0 - 12.0 years
13 - 17 Lacs
Gurugram
Work from Office
As a Technical Lead, you will be responsible for leading the development and delivery of the platforms. This includes overseeing the entire product lifecycle, from solution through execution and launch, and building the right team in close collaboration with business and product teams.

Primary responsibilities:
- Work collaboratively with a core team of architects, business teams, and developers spread across different locations to shape solutions to requirements from a technical perspective
- Coordinate closely with the design and analysis teams to fill gaps in the Product Requirement Documents (PRDs), identify edge use cases, and build a foolproof solution
- Lead and mentor a team of technical engineers, providing guidance and support in the execution of product development projects
- Proficiency with practices like Test-Driven Development, Continuous Integration, Continuous Delivery, and infrastructure automation
- Experience with the JS stack: ReactJS, NodeJS
- Experience with database engines (MySQL and PostgreSQL), with proven knowledge of database migrations and high-throughput, low-latency use cases
- Experience with key-value stores like Redis, MongoDB, and similar
- Preferred: knowledge of distributed massive data processing technologies like Spark, Trino, or similar, with proven experience in event-driven data pipelines
- Experience with data warehouses like BigQuery, Redshift, or similar
- Experience with testing and code coverage tools like Jest, JUnit, SonarQube, or similar
- Experience with CI/CD tools like Jenkins, AWS CodePipeline, or similar
- Experience improving the security posture of the product
- Research and build POCs using available frameworks to ensure feasibility
- Create technical design documents and present architectural details to a larger audience
- Participate in architecture and design reviews for projects that require complex technical solutions
- Experience with microservices architecture and exposure to cloud services platforms (GCP/AWS)
- Develop reusable frameworks/components and POCs to accelerate project development
- Discover third-party APIs/accounts for integration purposes (in a cost-effective manner)
- Responsible for making the overall product architecture scalable from a future perspective
- Establish and maintain engineering processes, standards, and best practices to ensure consistency and quality across projects
- Coordinate with cross-functional teams to resolve technical challenges, mitigate risks, and ensure timely delivery of products
- Stay updated with the latest trends and advancements in related technologies

Functional responsibilities:
- Facilitate daily stand-up meetings, sprint planning, sprint reviews, and retrospective meetings
- Work closely with the product owner to prioritize the product backlog and ensure that user stories are well-defined and ready for development
- Identify and address issues or conflicts that may impact project delivery or team morale
- Lead risk management and incident management activities
- Conduct regular performance and code reviews, and provide feedback to development team members to support their professional development
- Experience with Agile project management tools such as Jira and Trello

Required skills:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 10+ years of experience with a proven track record of successfully leading cross-functional engineering teams in the development and delivery of complex products
- Strong facilitation and coaching skills, with the ability to guide teams through Agile ceremonies and practices
- Excellent communication and interpersonal skills, with the ability to build rapport and trust with team members and stakeholders
- Proven ability to identify and resolve impediments or conflicts that may arise during the development process
- Ability to thrive in a fast-paced, dynamic environment and adapt quickly to changing priorities
- A continuous-growth, learner mindset with a passion for Agile principles and practices and a commitment to ongoing professional development
- Experience with Agile methodologies, having participated in sprints and scrums
- Ability to take ownership of complex tasks and deliver while mentoring team members
Posted 1 week ago
2.0 - 6.0 years
10 - 14 Lacs
Mumbai, Gurugram, Bengaluru
Work from Office
As a Software Engineer at RevX, you will take part in the implementation and delivery of robust features and products. You will work closely with software engineers, data engineers, engineering managers, and product managers to build and ship new features that optimize customer engagement and operational efficiency, and drive growth.

Required experience/skills:
- 4+ years of relevant work experience in a software development role
- Strong in data structures, algorithms, and problem-solving
- Experience working on Java, J2EE, and Spring Boot applications
- Experience programming with messaging (Kafka, RabbitMQ) and caching (Ehcache, Redis)
- Experience programming with MySQL, Elasticsearch, and BigQuery
- Working knowledge of Angular 8+ (TypeScript)
- Experience with HTML5, CSS3, JavaScript (ECMA 5/6), and UI frameworks
- Working knowledge of the Unix environment (shell, scripting)
- Strong experience in Agile and test-driven methodologies
- A self-starter attitude and the ability to make decisions independently
- Proficient understanding of the deployment process for client-side applications
- Proficient understanding of code versioning tools, such as Git or SVN
- Hands-on experience in responsive design

Major responsibilities:
- Design and build complex systems that can scale rapidly with little maintenance
- Design and implement effective service/product interfaces
- Lead and successfully complete software projects without major guidance from a manager/lead
- Provide technical support for many applications within the technology portfolio
- Respond to and troubleshoot complex problems quickly, efficiently, and effectively
- Handle multiple competing priorities in an agile, fast-paced environment
- Create and maintain documentation for your projects

Education: Bachelor of Engineering or a similar degree from any reputed university.
Posted 1 week ago
5.0 - 9.0 years
12 - 16 Lacs
Mumbai, Gurugram, Bengaluru
Work from Office
Research and problem-solving: Identify and frame business problems, conduct exploratory data analysis, and propose innovative data science solutions tailored to business needs.
Leadership & communication: Serve as a technical referent for the research team, driving high-impact, high-visibility initiatives. Effectively communicate complex scientific concepts to senior stakeholders, ensuring insights are actionable for both technical and non-technical audiences. Mentor and develop scientists within the team, fostering growth and technical excellence.
Algorithm development: Design, optimize, and implement advanced machine learning algorithms, including neural networks, ensemble models (XGBoost, random forests), and clustering techniques.
End-to-end project ownership: Lead the development, deployment, and monitoring of machine learning models and data pipelines for large-scale applications.
Model optimization and scalability: Focus on optimizing algorithms for performance and scalability, ensuring robust, well-calibrated models suitable for real-time environments.
A/B testing and validation: Design and execute experiments, including A/B testing, to validate model effectiveness and business impact.
Big data handling: Leverage tools like BigQuery, advanced SQL, and cloud platforms (e.g., GCP) to process and analyze large datasets.
Collaboration and mentorship: Work closely with engineering, product, and campaign management teams, while mentoring junior data scientists in best practices and advanced techniques.
Data visualization: Create impactful visualizations using tools like matplotlib, seaborn, Looker, and Grafana to communicate insights effectively to stakeholders.

Required experience/skills:
- 5-8 years of hands-on experience in data science or machine learning roles
- 2+ years leading data science projects in AdTech
- Strong hands-on skills in advanced statistics, machine learning, and deep learning
- Demonstrated ability to implement and optimize neural networks and other advanced ML models
- Proficiency in Python for developing machine learning models, with a strong grasp of TensorFlow or PyTorch
- Expertise in handling large datasets using advanced SQL and big data tools like BigQuery
- In-depth knowledge of MLOps pipelines, from data preprocessing to deployment and monitoring
- Strong background in A/B testing, statistical analysis, and experimental design
- Proven capability in clustering, segmentation, and unsupervised learning methods
- Strong problem-solving and analytical skills with a focus on delivering business value

Education: A Master's in Data Science, Computer Science, Mathematics, Statistics, or a related field is preferred. A Bachelor's degree with exceptional experience will also be considered.
Posted 1 week ago
3.0 - 6.0 years
8 - 12 Lacs
Gurugram
Work from Office
We're looking for a skilled Node.js Developer with a strong foundation in data engineering to join our engineering team. You'll be responsible for building scalable backend systems using modern Node.js frameworks and tools, while also designing and maintaining robust data pipelines and integrations.

Primary responsibilities:
- Build and maintain performant APIs and backend services using Node.js and frameworks like Express.js, NestJS, or Fastify
- Develop and manage ETL/ELT pipelines, data models, schemas, and data transformation logic for analytics and operational use
- Ensure data quality, integrity, and consistency through validation, monitoring, and logging
- Work with database technologies (MySQL, PostgreSQL, MongoDB, Redis) to store and manage application and analytical data
- Implement integrations with third-party APIs and internal microservices
- Use ORMs like Sequelize, TypeORM, or Prisma for data modeling and interaction
- Write unit, integration, and E2E tests using frameworks such as Jest, Mocha, or Supertest
- Collaborate with frontend, DevOps, and data engineering teams to ship end-to-end features
- Monitor and optimize system performance, logging (e.g., Winston, Pino), and error handling
- Contribute to CI/CD workflows and infrastructure automation using tools like PM2, Docker, and Jenkins

Required skills:
- 3+ years of experience in backend development using Node.js
- Hands-on experience with Express.js, NestJS, or other Node.js frameworks
- Understanding of data modelling, partitioning, indexing, and query optimization
- Experience building and maintaining data pipelines, preferably using custom Node.js scripts
- Familiarity with stream processing and messaging systems (e.g., Kafka, RabbitMQ, or Redis Streams)
- Solid understanding of SQL and NoSQL data stores and schema design
- Strong knowledge of JavaScript, and preferably TypeScript
- Familiarity with cloud platforms (AWS/GCP/Azure) and services like S3, Lambda, or Cloud Functions
- Experience with containerized environments (Docker) and CI/CD
- Experience with data warehouses (e.g., BigQuery, Snowflake, Redshift)

Nice to have:
- Cloud certification in AWS or GCP
- Experience with distributed processing tools (e.g., Spark, Trino/Presto)
- Experience with data transformation tools (e.g., DBT, SQLMesh) and data orchestration (e.g., Apache Airflow, Kestra)
- Familiarity with serverless architectures and tools like Vercel/Netlify for deployment
Posted 1 week ago
3.0 - 6.0 years
10 - 15 Lacs
Gurugram, Bengaluru
Work from Office
- 3+ years of experience in data science roles, working with tabular data in large-scale projects
- Experience in feature engineering and working with methods such as XGBoost, LightGBM, factorization machines, and similar algorithms
- Experience in the adtech or fintech industries is a plus; familiarity with clickstream data, predictive modeling for user engagement, or bidding optimization is highly advantageous
- MS or PhD in mathematics, computer science, physics, statistics, electrical engineering, or a related field
- Proficiency in Python (3.9+), with experience in scientific computing and machine learning tools (e.g., NumPy, Pandas, SciPy, scikit-learn, matplotlib)
- Familiarity with deep learning frameworks (such as TensorFlow or PyTorch) is a plus
- Strong expertise in applied statistical methods, A/B testing frameworks, advanced experiment design, and interpreting complex experimental results
- Experience querying and processing data using SQL and working with distributed data storage solutions (e.g., AWS Redshift, Snowflake, BigQuery, Athena, Presto, MinIO)
- Experience in budget allocation optimization, lookalike modeling, LTV prediction, or churn analysis is a plus
- Ability to manage multiple projects, prioritize tasks effectively, and maintain a structured approach to complex problem-solving
- Excellent communication and collaboration skills to work effectively with both technical and business teams
Posted 1 week ago
7.0 - 12.0 years
20 - 25 Lacs
Gurugram
Work from Office
As a Technical Lead, you will be responsible for leading the development and delivery of the platforms. This includes overseeing the entire product lifecycle, from solution through execution and launch, and building the right team in close collaboration with business and product teams.

Primary responsibilities:
- Design end-to-end solutions that meet business requirements and align with the enterprise architecture
- Define the architecture blueprint, including integration, data flow, application, and infrastructure components
- Evaluate and select appropriate technology stacks, tools, and frameworks
- Ensure proposed solutions are scalable, maintainable, and secure
- Collaborate with business and technical stakeholders to gather requirements and clarify objectives
- Act as a bridge between business problems and technology solutions
- Guide development teams during the execution phase to ensure solutions are implemented according to design
- Identify and mitigate architectural risks and issues
- Ensure compliance with architecture principles, standards, policies, and best practices
- Document architectures, designs, and implementation decisions clearly and thoroughly
- Identify opportunities for innovation and efficiency within existing and upcoming solutions
- Conduct regular performance and code reviews, and provide feedback to development team members to support their professional development
- Lead proof-of-concept initiatives to evaluate new technologies

Functional responsibilities:
- Facilitate daily stand-up meetings, sprint planning, sprint reviews, and retrospective meetings
- Work closely with the product owner to prioritize the product backlog and ensure that user stories are well-defined and ready for development
- Identify and address issues or conflicts that may impact project delivery or team morale
- Experience with Agile project management tools such as Jira and Trello

Required skills:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 7+ years of experience in software engineering, with at least 3 years in a solution architecture or technical leadership role
- Proficiency with the AWS or GCP cloud platform
- Strong implementation knowledge of the JS tech stack: NodeJS and ReactJS
- Experience with database engines (MySQL and PostgreSQL), with proven knowledge of database migrations and high-throughput, low-latency use cases
- Experience with key-value stores like Redis, MongoDB, and similar
- Preferred: knowledge of distributed technologies (Kafka, Spark, Trino, or similar), with proven experience in event-driven data pipelines
- Proven experience setting up big data pipelines to handle high-volume transactions and transformations
- Experience with BI tools: Looker, Power BI, Metabase, or similar
- Experience with data warehouses like BigQuery, Redshift, or similar
- Familiarity with CI/CD pipelines, containerization (Docker/Kubernetes), and IaC (Terraform/CloudFormation)

Good to have:
- Certifications such as AWS Certified Solutions Architect, Azure Solutions Architect Expert, TOGAF, etc.
- Experience setting up analytical pipelines using BI tools (Looker, Power BI, Metabase, or similar) and low-level Python tools like Pandas, NumPy, and PyArrow
- Experience with data transformation tools like DBT, SQLMesh, or similar
- Experience with data orchestration tools like Apache Airflow, Kestra, or similar
Posted 1 week ago