5.0 - 10.0 years
25 - 35 Lacs
Noida, Pune, Bengaluru
Work from Office
Description: We are seeking a proficient Data Governance Engineer to lead the development and management of robust data governance frameworks on Google Cloud Platform (GCP). The ideal candidate will bring in-depth expertise in data management, metadata frameworks, compliance, and security within cloud environments to ensure high-quality, secure, and compliant data practices aligned with organizational goals.

Requirements:
- 4+ years of experience in data governance, data management, or data security.
- Hands-on experience with Google Cloud Platform (GCP), including BigQuery, Dataflow, Dataproc, and Google Data Catalog.
- Strong command of metadata management, data lineage, and data quality tools (e.g., Collibra, Informatica).
- Deep understanding of data privacy laws and compliance frameworks.
- Proficiency in SQL and Python for governance automation.
- Experience with RBAC, encryption, and data masking techniques.
- Familiarity with ETL/ELT pipelines and data warehouse architectures.

Job Responsibilities:
- Develop and implement comprehensive data governance frameworks, focusing on metadata management, lineage tracking, and data quality.
- Define, document, and enforce data governance policies, access control mechanisms, and security standards using GCP-native services such as IAM, DLP, and KMS.
- Manage metadata repositories using tools like Collibra, Informatica, Alation, or Google Data Catalog.
- Collaborate with data engineering and analytics teams to ensure compliance with GDPR, CCPA, SOC 2, and other regulatory standards.
- Automate processes for data classification, monitoring, and reporting using Python and SQL (see the sketch below).
- Support data stewardship initiatives, including the development of data dictionaries and governance documentation.
- Optimize ETL/ELT pipelines and data workflows to meet governance best practices.

What We Offer:
- Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
- Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities.
- Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
- Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft-skill trainings.
- Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
- Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer subsidized food, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and a GL Club where you can have coffee or tea with your colleagues over a game, plus discounts at popular stores and restaurants!
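As a rough illustration of the governance automation this role describes, here is a minimal sketch of a data-quality monitoring check run against BigQuery with Python. The project, table, and column names are hypothetical placeholders, and a real implementation would feed results into the organization's own reporting and alerting tools.

```python
from google.cloud import bigquery  # assumes google-cloud-bigquery is installed

# Hypothetical governance check: measure null rates on columns a data steward
# has flagged as mandatory, and report any violations.
PROJECT = "example-project"                     # placeholder project id
TABLE = "example_dataset.customers"             # placeholder dataset.table
MANDATORY_COLUMNS = ["customer_id", "email"]    # columns flagged by stewards

def null_rate_report(client: bigquery.Client) -> dict:
    """Return the fraction of NULL values for each mandatory column."""
    checks = ", ".join(
        f"COUNTIF({col} IS NULL) / COUNT(*) AS {col}_null_rate"
        for col in MANDATORY_COLUMNS
    )
    query = f"SELECT {checks} FROM `{PROJECT}.{TABLE}`"
    row = list(client.query(query).result())[0]
    return dict(row.items())

if __name__ == "__main__":
    client = bigquery.Client(project=PROJECT)
    for column, rate in null_rate_report(client).items():
        status = "FAIL" if rate > 0.0 else "ok"
        print(f"{column}: {rate:.2%} null ({status})")
```

A check like this could run on a schedule (e.g., from Cloud Composer) and page the stewardship team when a null-rate threshold is breached.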
Posted 1 week ago
6.0 - 11.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Capco, a Wipro company, is a global technology and management consulting firm. We were named Consultancy of the Year at the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services, and energy sectors, and we are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry - projects that will transform the financial services industry.

MAKE AN IMPACT
Innovative thinking, delivery excellence, and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.

Location - Bangalore

Skills and Qualifications:
- At least 6+ years of relevant experience would generally be expected to find the skills required for this role.
- 6+ years as a practitioner in data engineering or a related field.
- Strong programming skills in Python, with experience in data manipulation and analysis libraries (e.g., Pandas, NumPy, Dask).
- Proficiency in SQL and experience with relational databases (e.g., Sybase, DB2, Snowflake, PostgreSQL, SQL Server).
- Experience with data warehousing concepts and technologies (e.g., dimensional modeling, star schema, data vault modeling, Kimball methodology, Inmon methodology, data lake design).
- Familiarity with ETL/ELT processes and tools (e.g., Informatica PowerCenter, IBM DataStage, Ab Initio) and open-source frameworks for data transformation (e.g., Apache Spark, Apache Airflow).
- Experience with message queues and streaming platforms (e.g., Kafka, RabbitMQ).
- Experience with version control systems (e.g., Git).
- Experience using Jupyter notebooks for data exploration, analysis, and visualization.
- Excellent communication and collaboration skills.
- Ability to work independently and as part of a geographically distributed team.

Nice to have:
- Understanding of cloud-based application development and DevOps.
- Understanding of business intelligence tools - Tableau, Power BI.
- Understanding of the trade lifecycle / financial markets.

If you are keen to join us, you will be part of an organization that values your contributions, recognizes your potential, and provides ample opportunities for growth. For more information, visit www.capco.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube.
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
You have 8 years of Java experience and strong expertise in Spring Boot. You also have hands-on experience with Java Spark and are proficient in writing complex SQL and PL/SQL queries for data analysis, data lineage, and reconciliation, preferably on PostgreSQL or Oracle databases. You have a track record of creating data lineage documents, source-to-target mapping (STM) documents, and low-level technical specification documents.

You have experience designing and implementing ETL/ELT frameworks for complex data warehouses and data marts. You are known for your hands-on development mentality and your proactive approach to troubleshooting and solving complex problems.

Desirable skills and experience include having worked in the life insurance industry, a keen ability to prioritize and manage multiple assignments efficiently, and experience in an on-site/off-site development model.

This is a full-time position requiring expertise in Java (8 years), Spark (8 years), and Spring Boot (8 years). The work location for this role is in person.
Posted 1 week ago
8.0 - 13.0 years
15 - 27 Lacs
Bengaluru
Hybrid
Job Description: We are seeking a visionary and experienced Senior Data Architect to lead the design and implementation of our enterprise-wide data architecture. The role requires a solid foundation in Java, Spring, and SQL, and strong knowledge of modern data platforms and cloud technologies such as Azure Databricks, Snowflake, and BigQuery. You will be responsible for modernizing our data infrastructure, ensuring security and accessibility of data assets, and providing strategic direction to the data team.

Key Responsibilities:
- Define and implement enterprise data architecture aligned with organizational goals.
- Design and lead scalable, secure, and resilient data platforms for structured and unstructured data.
- Architect cloud-native data solutions using tools like Databricks, Snowflake, Redshift, and BigQuery.
- Lead design and integration of data lakes, warehouses, and ETL pipelines.
- Collaborate with cross-functional teams and leadership to define data needs and deliver solutions.
- Guide data engineers and analysts in best practices, modeling, and governance.
- Drive initiatives around data quality, metadata, lineage, and master data management (MDM).
- Ensure compliance with data privacy regulations (GDPR, HIPAA, CCPA).
- Lead modernization/migration of legacy systems to modern cloud platforms.

Must-Have Skills:
- Strong expertise in Java, Spring Framework, and SQL.
- Experience with Azure Databricks or similar cloud data platforms.
- Hands-on experience with Snowflake, BigQuery, Redshift, or Azure Synapse.
- Deep understanding of data modeling tools like Erwin or ER/Studio.
- Proven experience designing data platforms in hybrid/multi-cloud setups.
- Strong background in ETL/ELT pipelines, data APIs, and integration.
- Proficiency in Python or similar languages used for data engineering.
- Knowledge of DevOps and CI/CD processes in data pipelines.

Preferred Qualifications:
- 10+ years of experience in data architecture.
- At least 3 years in a senior or lead role.
- Familiarity with data governance, security policies, identity management, and RBAC.
- Excellent leadership, communication, and stakeholder management skills.
Posted 1 week ago
10.0 - 15.0 years
20 Lacs
Chennai
Work from Office
Candidate Specification: Any graduate with a minimum of 10+ years of relevant experience.

Job Description:
- Strong hands-on experience with Snowflake, Redshift, and BigQuery.
- Proficiency in Data Build Tool (dbt) and SQL-based data modeling and transformation.
- Solid understanding of data warehousing concepts, star/snowflake schemas, and performance optimization.
- Experience with modern ETL/ELT tools and cloud-based data pipeline frameworks.
- Familiarity with version control systems (e.g., Git) and CI/CD practices for data workflows.
- Strong problem-solving skills and attention to detail.
- Excellent interpersonal skills.

Contact Person: Deepikad
Email ID: deepikad@gojobs.biz
Posted 1 week ago
15.0 - 20.0 years
2 - 5 Lacs
Hyderabad
Work from Office
Project Role: Quality Engineer (Tester)
Project Role Description: Enables full-stack solutions through multi-disciplinary team planning and ecosystem integration to accelerate delivery and drive quality across the application lifecycle. Performs continuous testing for security, API, and regression suites. Creates automation strategy and automated scripts, and supports data and environment configuration. Participates in code reviews, monitors, and reports defects to support continuous improvement activities for the end-to-end testing process.

Must-have skills: Data Warehouse ETL Testing
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Key responsibilities:
- Perform QA for data build tool (dbt) code/loads.
- Develop and execute test plans, test cases, and test scripts to ensure quality deliverables.
- Identify, document, and track software defects to resolution.
- Provide feedback on usability and functionality.
- Conduct functional, regression, and performance testing.

Technical Experience:
- Must-have skills: Proficiency in Data Warehouse ETL Testing and writing complex SQL.
- Good-to-have skills: Experience with Data Build Tool, Functional Test Planning, Oracle Procedural Language Extensions to SQL (PL/SQL).
- Strong understanding of ETL/ELT processes and data warehouse concepts.
- Experience in testing tools and methodologies.
- Knowledge of SQL queries and database testing.
- Ability to analyze and interpret complex data sets.
- Must have experience working in agile teams.

Professional Attributes: Good communication skills.
Educational Qualification: Bachelor of Engineering or equivalent degree; 15 years full-time education.
Posted 1 week ago
15.0 - 20.0 years
2 - 5 Lacs
Pune
Work from Office
Project Role: Quality Engineer (Tester)
Project Role Description: Enables full-stack solutions through multi-disciplinary team planning and ecosystem integration to accelerate delivery and drive quality across the application lifecycle. Performs continuous testing for security, API, and regression suites. Creates automation strategy and automated scripts, and supports data and environment configuration. Participates in code reviews, monitors, and reports defects to support continuous improvement activities for the end-to-end testing process.

Must-have skills: Data Warehouse ETL Testing
Good-to-have skills: Functional Test Planning, Oracle Procedural Language Extensions to SQL (PL/SQL)
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Key responsibilities:
- Perform QA for data build tool (dbt) code/loads.
- Develop and execute test plans, test cases, and test scripts to ensure quality deliverables.
- Identify, document, and track software defects to resolution.
- Provide feedback on usability and functionality.
- Conduct functional, regression, and performance testing.

Technical Experience:
- Must-have skills: Proficiency in Data Warehouse ETL Testing and writing complex SQL.
- Good-to-have skills: Experience with Data Build Tool, Functional Test Planning, Oracle Procedural Language Extensions to SQL (PL/SQL).
- Strong understanding of ETL/ELT processes and data warehouse concepts.
- Experience in testing tools and methodologies.
- Knowledge of SQL queries and database testing.
- Ability to analyze and interpret complex data sets.
- Must have experience working in agile teams.

Professional Attributes: Good communication skills.
Educational Qualification: Bachelor of Engineering or equivalent degree; 15 years full-time education.
Posted 1 week ago
4.0 - 6.0 years
20 - 25 Lacs
Noida, Pune, Chennai
Work from Office
We are seeking a skilled and detail-oriented Data Engineer with 4 to 6 years of hands-on experience in Microsoft Fabric, Snowflake, and Matillion. The ideal candidate will play a key role in supporting MS Fabric and migrating from MS Fabric to Snowflake and Matillion.

Roles and Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines using Matillion and integrate data from various sources.
- Architect and optimize Snowflake data warehouses, ensuring efficient data storage, querying, and performance tuning.
- Leverage Microsoft Fabric for end-to-end data engineering tasks, including data ingestion, transformation, and reporting.
- Collaborate with data analysts, scientists, and business stakeholders to deliver high-quality, consumable data products.
- Implement data quality checks, monitoring, and observability across pipelines (see the sketch below).
- Automate data workflows and support CI/CD practices for data deployments.
- Troubleshoot performance bottlenecks and data pipeline failures with a root-cause-analysis mindset.
- Maintain thorough documentation of data processes, pipelines, and architecture.

Strong expertise with:
- Microsoft Fabric (Dataflows, Pipelines, Lakehouse, Notebooks, etc.)
- Snowflake (warehouse sizing, SnowSQL, performance tuning)
- Matillion (ETL/ELT orchestration, job optimization, connectors)
- Proficiency in SQL and data modeling (dimensional/star schema, normalization).
- Experience with Python or other scripting languages for data manipulation.
- Familiarity with version control tools (e.g., Git) and CI/CD workflows.
- Solid understanding of cloud data architecture (Azure preferred).
- Strong problem-solving and debugging skills.
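By way of a hedged example of the pipeline monitoring mentioned above, the following sketch runs a simple row-count and freshness check against Snowflake using the snowflake-connector-python library. The connection parameters, warehouse name, table, and LOADED_AT column are placeholder assumptions, not details from the posting.

```python
import os
import snowflake.connector  # assumes snowflake-connector-python is installed

# Hypothetical check: confirm yesterday's load landed and is non-empty.
TABLE = "ANALYTICS.PUBLIC.ORDERS"  # placeholder fully qualified table name

def check_daily_load() -> None:
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="COMPUTE_WH",  # placeholder warehouse
    )
    try:
        cur = conn.cursor()
        cur.execute(
            f"SELECT COUNT(*), MAX(LOADED_AT) FROM {TABLE} "
            "WHERE LOADED_AT >= DATEADD(day, -1, CURRENT_TIMESTAMP())"
        )
        row_count, latest_load = cur.fetchone()
        if row_count == 0:
            raise RuntimeError(f"No rows loaded into {TABLE} in the last 24 hours")
        print(f"{TABLE}: {row_count} rows, latest load at {latest_load}")
    finally:
        conn.close()

if __name__ == "__main__":
    check_daily_load()
```

A check like this would typically run as the last step of a Matillion or Fabric pipeline so failures surface before downstream reports refresh.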
Posted 1 week ago
7.0 - 10.0 years
35 - 40 Lacs
Chennai
Hybrid
Program Manager Data - Chennai

Forbes Advisor is a high-growth digital media and technology company that empowers consumers to make confident decisions about money, health, careers, and everyday life. Our global data organization builds modern, AI-augmented pipelines that turn information into revenue-driving insight.

Job Description: We're hiring a Program Manager to orchestrate complex, cross-functional data initiatives, from revenue-pipeline automation to analytics product launches. You'll be the connective tissue between Data Engineering, Analytics, RevOps, Product, and external partners, ensuring programs land on time, on scope, and with measurable impact. If you excel at turning vision into executable roadmaps, mitigating risk before it bites, and communicating clearly across technical and business audiences, we'd love to meet you.

Key Responsibilities:
- Own program delivery for multi-team data products (e.g., revenue-data pipelines, attribution models, partner-facing reporting APIs).
- Build and maintain integrated roadmaps, aligning sprint plans, funding, and resource commitments.
- Drive agile ceremonies (backlog grooming, sprint planning, retrospectives) and track velocity, burn-down, and cycle-time metrics.
- Create transparent status reporting (risks, dependencies, OKRs) tailored for audiences from engineers to C-suite stakeholders.
- Proactively remove blockers by coordinating with Platform, IT, Legal/Compliance, and external vendors.
- Champion process optimization: intake, prioritization, change management, and post-mortems.
- Partner with RevOps and Media teams to ensure program outputs translate into revenue growth and faster decision making.
- Facilitate launch readiness (QA checklists, enablement materials, go-live runbooks) so new data products land smoothly.
- Foster a culture of documentation, psychological safety, and continuous improvement within the data organization.

Experience required:
- 7+ years of program or project management experience in data, analytics, SaaS, or high-growth tech.
- Proven success delivering complex, multi-stakeholder initiatives on aggressive timelines.
- Expertise with agile frameworks (Scrum/Kanban) and modern collaboration tools (Jira, Asana, Notion/Confluence, Slack).
- Strong understanding of data and cloud concepts (pipelines, ETL/ELT, BigQuery, dbt, Airflow/Composer).
- Excellent written and verbal communication; able to translate between technical teams and business leaders.
- Risk-management mindset: identify, quantify, and drive mitigation before issues escalate.
- Experience coordinating across time zones and cultures in a remote-first environment.

Nice to Have:
- Formal certification (PMP, PMI-ACP, CSM, SAFe, or equivalent).
- Familiarity with GCP services, Looker/Tableau, or marketing-data stacks (Google Ads, Meta, GA4).
- Exposure to revenue operations, performance marketing, or subscription/affiliate business models.
- Background in change-management or process-improvement methodologies (Lean, Six Sigma).

Perks:
- Monthly long weekends: every third Friday off.
- Fitness and commute reimbursement.
- Remote-first culture with flexible hours and a high-trust environment.
- Opportunity to shape a world-class data platform inside a trusted global brand.
- Collaborate with talented engineers, analysts, and product leaders who value innovation and impact.
Posted 1 week ago
3.0 - 7.0 years
11 - 15 Lacs
Gurugram
Work from Office
Overview: We are seeking an experienced Data Modeller with expertise in designing and implementing data models for modern data platforms. This role requires deep knowledge of data modeling techniques, healthcare data structures, and experience with Databricks Lakehouse architecture. The ideal candidate will have a proven track record of translating complex business requirements into efficient, scalable data models that support analytics and reporting needs.

About the Role: As a Data Modeller, you will be responsible for designing and implementing data models for our Databricks-based Modern Data Platform. You will work closely with business stakeholders, data architects, and data engineers to create logical and physical data models that support the migration from legacy systems to the Databricks Lakehouse architecture, ensuring data integrity, performance, and compliance with healthcare industry standards.

Key Responsibilities:
- Design and implement logical and physical data models for Databricks Lakehouse implementations.
- Translate business requirements into efficient, scalable data models.
- Create and maintain data dictionaries, entity relationship diagrams, and model documentation.
- Develop dimensional models, data vault models, and other modeling approaches as appropriate (see the sketch after this posting).
- Support the migration of data models from legacy systems to the Databricks platform.
- Collaborate with data architects to ensure alignment with the overall data architecture.
- Work with data engineers to implement and optimize data models.
- Ensure data models comply with healthcare industry regulations and standards.
- Implement data modeling best practices and standards.
- Provide guidance on data modeling approaches and techniques.
- Participate in data governance initiatives and data quality assessments.
- Stay current with evolving data modeling techniques and industry trends.

Qualifications:
- Extensive experience in data modeling for analytics and reporting systems.
- Strong knowledge of dimensional modeling, data vault, and other modeling methodologies.
- Experience with the Databricks platform and Delta Lake architecture.
- Expertise in healthcare data modeling and industry standards.
- Experience migrating data models from legacy systems to modern platforms.
- Strong SQL skills and experience with data definition languages.
- Understanding of data governance principles and practices.
- Experience with data modeling tools and technologies.
- Knowledge of performance optimization techniques for data models.
- Bachelor's degree in Computer Science, Information Systems, or a related field; advanced degree preferred.
- Professional certifications in data modeling or related areas.

Technical Skills:
- Data modeling methodologies (dimensional, data vault, etc.)
- Databricks platform and Delta Lake
- SQL and data definition languages
- Data modeling tools (erwin, ER/Studio, etc.)
- Data warehousing concepts and principles
- ETL/ELT processes and data integration
- Performance tuning for data models
- Metadata management and data cataloging
- Cloud platforms (AWS, Azure, GCP)
- Big data technologies and distributed computing

Healthcare Industry Knowledge:
- Healthcare data structures and relationships
- Healthcare terminology and coding systems (ICD, CPT, SNOMED, etc.)
- Healthcare data standards (HL7, FHIR, etc.)
- Healthcare analytics use cases and requirements
- Optionally, healthcare regulatory requirements (HIPAA, HITECH, etc.)
- Clinical and operational data modeling challenges
- Population health and value-based care data needs

Personal Attributes:
- Strong analytical and problem-solving skills
- Excellent attention to detail and data quality focus
- Ability to translate complex business requirements into technical solutions
- Effective communication skills with both technical and non-technical stakeholders
- Collaborative approach to working with cross-functional teams
- Self-motivated with the ability to work independently
- Continuous learner who stays current with industry trends

What We Offer:
- Opportunity to design data models for cutting-edge healthcare analytics
- Collaborative and innovative work environment
- Competitive compensation package
- Professional development opportunities
- Work with leading technologies in the data space

This position requires a unique combination of data modeling expertise, technical knowledge, and healthcare industry understanding. The ideal candidate will have demonstrated success in designing efficient, scalable data models and a passion for creating data structures that enable powerful analytics and insights.
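To make the dimensional-modeling responsibility above concrete, here is a minimal, hypothetical sketch of a star-schema pair (one dimension, one fact) expressed with SQLAlchemy in Python. The table and column names are illustrative only and are not taken from the posting.

```python
from sqlalchemy import Column, Date, ForeignKey, Integer, Numeric, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class DimPatient(Base):
    """Hypothetical patient dimension: one row per patient, descriptive attributes only."""
    __tablename__ = "dim_patient"
    patient_key = Column(Integer, primary_key=True)   # surrogate key
    patient_source_id = Column(String(64))            # natural key from the source EHR
    gender = Column(String(16))
    birth_date = Column(Date)

class FactEncounter(Base):
    """Hypothetical encounter fact: one row per visit, with measures and dimension keys."""
    __tablename__ = "fact_encounter"
    encounter_key = Column(Integer, primary_key=True)
    patient_key = Column(Integer, ForeignKey("dim_patient.patient_key"))
    encounter_date = Column(Date)
    length_of_stay_days = Column(Integer)
    billed_amount = Column(Numeric(12, 2))
```

In a star schema, the fact table holds the measures and foreign keys while each dimension stays denormalized, which keeps analytical joins shallow and predictable.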
Posted 1 week ago
15.0 - 20.0 years
16 - 20 Lacs
Gurugram, Bengaluru
Work from Office
Role Overview: We are seeking a highly skilled and experienced Data Manager to lead the development, governance, and utilization of enterprise data systems. This is a strategic leadership role focused on ensuring the seamless and secure flow of data across our platforms and teams, enabling timely and accurate access to actionable insights. The ideal candidate brings a strong foundation in data architecture, governance, and cloud-native systems, combined with hands-on experience managing cross-functional teams and implementing scalable, secure, and cost-efficient data solutions.

Your Objectives:
- Optimize data systems and infrastructure to support business intelligence and analytics.
- Implement best-in-class data governance, quality, and security frameworks.
- Lead a team of data and software engineers to develop, scale, and maintain cloud-native platforms.
- Support data-driven decision-making across the enterprise.

Key Responsibilities:
- Develop and enforce policies for effective and ethical data management.
- Design and implement secure, efficient processes for data collection, storage, analysis, and sharing.
- Monitor and enhance data quality, consistency, and lineage.
- Oversee integration of data from multiple systems and business units.
- Partner with internal stakeholders to support data needs, dashboards, and ad hoc reporting.
- Maintain compliance with regulatory frameworks such as GDPR and HIPAA.
- Troubleshoot data-related issues and implement sustainable resolutions.
- Ensure digital data systems are secure from breaches and data loss.
- Evaluate and recommend new data tools, architectures, and technologies.
- Support documentation using Atlassian tools and develop architectural diagrams.
- Automate cloud operations using infrastructure as code (e.g., Terraform) and DevOps practices.
- Facilitate inter-team communication to improve data infrastructure and eliminate silos.

Leadership & Strategic Duties:
- Manage, mentor, and grow a high-performing data engineering team.
- Lead cross-functional collaboration with backend engineers, architects, and product teams.
- Facilitate partnerships with cloud providers (e.g., AWS) to leverage cutting-edge technologies.
- Conduct architecture reviews and PR reviews, and drive engineering best practices.
- Collaborate with business, product, legal, and compliance teams to align data operations with enterprise goals.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 10-15 years of experience in enterprise data architecture, governance, or data platform development.
- Expertise in SQL, data modelling, and modern data tools (e.g., Snowflake, dbt, Fivetran).
- Deep understanding of AWS cloud services (Lambda, ECS, RDS, DynamoDB, S3, SQS).
- Proficient in scripting (Python, Bash) and CI/CD pipelines.
- Demonstrated experience with ETL/ELT orchestration (e.g., Airflow, Prefect).
- Strong understanding of DevOps, Terraform, containerization, and serverless computing.
- Solid grasp of data security, compliance, and regulatory requirements.

Preferred Experience (Healthcare Focused):
- Experience working in healthcare analytics or data environments.
- Familiarity with EHR/EMR systems such as Epic, Cerner, Meditech, or Allscripts.
- Deep understanding of healthcare data privacy, patient information handling, and clinical workflows.

Soft Skills & Team Fit:
- Strong leadership and mentoring mindset.
- Ability to manage ambiguity and work effectively in dynamic environments.
- Excellent verbal and written communication skills with technical and non-technical teams.
- Passionate about people development, knowledge sharing, and continuous learning.
- Resilient, empathetic, and strategically focused.
Posted 1 week ago
7.0 - 9.0 years
8 - 14 Lacs
Coimbatore
Work from Office
Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects.
- Architect, design, and implement scalable data pipelines and processing systems (see the sketch below).
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization.
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions.
- Conduct code reviews and mentor junior engineers to improve code quality and skills.
- Evaluate and implement new tools and frameworks to enhance data capabilities.
- Troubleshoot complex data-related issues and support production deployments.
- Ensure compliance with data security and governance standards.
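As a hedged illustration of the scalable pipelines this role covers, the sketch below aggregates a raw event feed into daily counts with PySpark. The file paths and column names are hypothetical placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal batch pipeline sketch: read raw events, aggregate, write partitioned output.
spark = SparkSession.builder.appName("daily_event_rollup").getOrCreate()

# Hypothetical input: newline-delimited JSON events with event_type and event_ts columns.
events = spark.read.json("s3a://example-bucket/raw/events/")

daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Write the rollup as Parquet, partitioned by date for efficient downstream reads.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-bucket/curated/daily_event_counts/"
)

spark.stop()
```

Partitioning the output by date is a common design choice here: downstream jobs and analysts can prune to the days they need instead of scanning the full history.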
Posted 1 week ago
7.0 - 9.0 years
8 - 14 Lacs
Kanpur
Work from Office
Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects.
- Architect, design, and implement scalable data pipelines and processing systems.
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization.
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions.
- Conduct code reviews and mentor junior engineers to improve code quality and skills.
- Evaluate and implement new tools and frameworks to enhance data capabilities.
- Troubleshoot complex data-related issues and support production deployments.
- Ensure compliance with data security and governance standards.
Posted 1 week ago
7.0 - 9.0 years
8 - 14 Lacs
Chandigarh
Work from Office
Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
- Provide technical leadership across Big Data and Python-based projects.
- Architect, design, and implement scalable data pipelines and processing systems.
- Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization.
- Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions.
- Conduct code reviews and mentor junior engineers to improve code quality and skills.
- Evaluate and implement new tools and frameworks to enhance data capabilities.
- Troubleshoot complex data-related issues and support production deployments.
- Ensure compliance with data security and governance standards.
Posted 1 week ago
5.0 - 10.0 years
12 - 15 Lacs
Chennai
Work from Office
Role & Responsibilities:
- Design and implement data pipelines using ETL/ELT tools and techniques.
- Configure and manage data storage solutions, including relational databases, data warehouses, and data lakes.
- Develop and implement data quality checks and monitoring processes.
- Automate data platform deployments and operations using scripting and DevOps tools (e.g., Git, CI/CD pipelines).
- Ensure compliance with data governance and security standards throughout the data platform development process.
- Troubleshoot and resolve data platform issues promptly and effectively.
- Collaborate with the Data Architect to understand data platform requirements and design specifications.
- Assist with data modelling and optimization tasks.
- Work with business stakeholders to translate their needs into technical solutions.
- Document the data platform architecture, processes, and best practices.
- Stay up to date with the latest trends and technologies in full-stack development, data engineering, and DevOps.
- Proactively suggest improvements and innovations for the data platform.

Required Skillset:
- ETL or ELT: AWS Glue / Azure Data Factory / Synapse / Matillion / dbt.
- Data Warehousing: Azure SQL Server / Redshift / BigQuery / Databricks / Snowflake (any one - mandatory).
- Data Visualization: Looker, Power BI, Tableau.
- SQL and Apache Spark / Python programming languages.
- Containerization technologies (e.g., Docker, Kubernetes).
- Cloud Experience: AWS / Azure / GCP.
- Scripting and DevOps tools (e.g., Git, CI/CD pipelines).
- AWS Certification - AWS Foundational certificates or AWS Technical certificates.
Posted 1 week ago
6.0 - 10.0 years
16 - 25 Lacs
Kanpur
Work from Office
Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms.

Key Responsibilities:
- Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT.
- Integrate and manage data from various sources using SAP Data Services.
- Develop and maintain scalable data models, data marts, and data warehouses.
- Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs.
- Implement best practices in data governance, data lineage, and metadata management.
- Monitor data quality, troubleshoot issues, and ensure data integrity.
- Optimize Snowflake data warehouse performance (partitioning, caching, query tuning).
- Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins), as shown in the sketch below.
- Document architecture, data flows, and technical specifications.
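As a rough sketch of the dbt deployment automation mentioned above, the script below wraps the dbt CLI so a CI job (Jenkins, GitHub Actions, etc.) can fail fast when models or tests break. The "ci" target is a placeholder assumption; an actual project would use its own profiles and selection logic.

```python
import subprocess
import sys

# Commands a CI job might run for a dbt project: install packages, build models, run tests.
# "ci" is a hypothetical target defined in the project's profiles.yml.
PIPELINE = [
    ["dbt", "deps"],
    ["dbt", "run", "--target", "ci"],
    ["dbt", "test", "--target", "ci"],
]

def main() -> None:
    for command in PIPELINE:
        print(f"Running: {' '.join(command)}")
        result = subprocess.run(command, check=False)
        if result.returncode != 0:
            # Surface the failing step so the CI system marks the build as failed.
            sys.exit(result.returncode)

if __name__ == "__main__":
    main()
```

Running the same script locally and in Jenkins keeps the deployment path identical in both places, which is the usual motivation for wrapping the CLI this way.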
Posted 1 week ago
6.0 - 10.0 years
16 - 25 Lacs
Coimbatore
Work from Office
Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms.

Key Responsibilities:
- Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT.
- Integrate and manage data from various sources using SAP Data Services.
- Develop and maintain scalable data models, data marts, and data warehouses.
- Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs.
- Implement best practices in data governance, data lineage, and metadata management.
- Monitor data quality, troubleshoot issues, and ensure data integrity.
- Optimize Snowflake data warehouse performance (partitioning, caching, query tuning).
- Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins).
- Document architecture, data flows, and technical specifications.
Posted 1 week ago
6.0 - 10.0 years
16 - 25 Lacs
Chandigarh
Work from Office
Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms.

Key Responsibilities:
- Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT.
- Integrate and manage data from various sources using SAP Data Services.
- Develop and maintain scalable data models, data marts, and data warehouses.
- Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs.
- Implement best practices in data governance, data lineage, and metadata management.
- Monitor data quality, troubleshoot issues, and ensure data integrity.
- Optimize Snowflake data warehouse performance (partitioning, caching, query tuning).
- Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins).
- Document architecture, data flows, and technical specifications.
Posted 1 week ago
6.0 - 10.0 years
16 - 25 Lacs
Varanasi
Work from Office
Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms.

Key Responsibilities:
- Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT.
- Integrate and manage data from various sources using SAP Data Services.
- Develop and maintain scalable data models, data marts, and data warehouses.
- Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs.
- Implement best practices in data governance, data lineage, and metadata management.
- Monitor data quality, troubleshoot issues, and ensure data integrity.
- Optimize Snowflake data warehouse performance (partitioning, caching, query tuning).
- Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins).
- Document architecture, data flows, and technical specifications.
Posted 1 week ago
2.0 - 6.0 years
3 - 7 Lacs
Hyderabad
Work from Office
Responsibilities:
- Design, develop, and maintain high-performance SQL and PL/SQL procedures, packages, and functions in Snowflake or other cloud database technologies.
- Apply advanced performance tuning techniques to optimize database objects, queries, indexing strategies, and resource usage.
- Develop code based on reading and understanding business and functional requirements, following the Agile process.
- Produce high-quality code to meet all project deadlines, ensuring the functionality matches the requirements.
- Analyze and resolve issues found during the testing or pre-production phases of the software delivery lifecycle, coordinating changes with project team leaders and cross-work-team members.
- Provide technical support to project team members and respond to inquiries regarding errors or questions about programs.
- Interact with architects, technical leads, team members, and project managers as required to address technical and schedule issues.
- Suggest and implement process improvements for estimating, development, and testing processes.
- Support the development of automated and repeatable processes for ETL/ELT, data integration, and data transformation using industry best practices.
- Support cloud migration and modernization initiatives, including re-platforming or refactoring legacy database objects for cloud-native platforms.

Qualifications:
- BS degree in Computer Science, Information Technology, Electrical/Electronic Engineering, or another related field, or equivalent.
- A minimum of 7 years of prior work experience with an application and database development organization, with deep expertise in Oracle PL/SQL or SQL Server T-SQL; must demonstrate experience delivering systems and projects from inception through implementation.
- Proven experience writing and optimizing complex stored procedures, functions, and packages in relational databases such as Oracle, MySQL, and SQL Server.
- Strong knowledge of performance tuning, including query optimization, indexing, statistics, execution plans, and partitioning.
- Understanding of data integration pipelines, ETL tools, and batch processing techniques.
- Solid software development and programming skills, with an understanding of design patterns and software development best practices.
- Experience with Snowflake, Python scripting, and data transformation frameworks like dbt is a plus.
- Work experience developing web applications with Java, JavaScript, HTML, and JSPs; experience with the Spring and Angular MVC frameworks.
Posted 1 week ago
6.0 - 10.0 years
16 - 25 Lacs
Lucknow
Work from Office
Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms.

Key Responsibilities:
- Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT.
- Integrate and manage data from various sources using SAP Data Services.
- Develop and maintain scalable data models, data marts, and data warehouses.
- Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs.
- Implement best practices in data governance, data lineage, and metadata management.
- Monitor data quality, troubleshoot issues, and ensure data integrity.
- Optimize Snowflake data warehouse performance (partitioning, caching, query tuning).
- Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins).
- Document architecture, data flows, and technical specifications.
Posted 1 week ago
6.0 - 10.0 years
16 - 25 Lacs
Ludhiana
Work from Office
Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms.

Key Responsibilities:
- Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT.
- Integrate and manage data from various sources using SAP Data Services.
- Develop and maintain scalable data models, data marts, and data warehouses.
- Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs.
- Implement best practices in data governance, data lineage, and metadata management.
- Monitor data quality, troubleshoot issues, and ensure data integrity.
- Optimize Snowflake data warehouse performance (partitioning, caching, query tuning).
- Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins).
- Document architecture, data flows, and technical specifications.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
We are currently seeking a full-time, office-based Data Engineer to join the Information Technology team supporting our rapidly growing corporate activities. In this role, you will work collaboratively with a team on tasks and projects crucial to the company's success. If you are looking for a dynamic career opportunity to utilize your expertise and further develop and enhance your skills, this position is the perfect fit for you.

Your responsibilities will include utilizing your skills in data warehousing, business intelligence, and databases such as Snowflake, ANSI SQL, SQL Server, and T-SQL. You will support programming/software development using ETL and ELT tools like dbt, Azure Data Factory, and SSIS. Designing, developing, enhancing, and supporting business intelligence systems, primarily using Microsoft Power BI, will be a key part of your role. Additionally, you will be responsible for collecting, analyzing, and documenting user requirements, participating in software validation processes, creating software applications following the software development lifecycle, and providing end-user support for applications.

To qualify for this position, you should have a Bachelor's degree in Computer Science, Data Science, or a related field, along with at least 3 years of experience in data engineering. Knowledge of developing dimensional data models, understanding of star schema and snowflake schema designs, solid ETL development and reporting knowledge, and familiarity with the Snowflake cloud data warehouse, Fivetran data integration, and dbt transformations are preferred. Proficiency in Python, REST APIs, and SQL Server databases is desired, and knowledge of C# and Azure development is a bonus. Excellent analytical, written, and oral communication skills are essential for this role.

Medpace is a full-service clinical contract research organization (CRO) dedicated to providing Phase I-IV clinical development services to the biotechnology, pharmaceutical, and medical device industries. With a mission to accelerate the global development of safe and effective medical therapeutics, Medpace leverages local regulatory and therapeutic expertise across various major areas. Headquartered in Cincinnati, Ohio, Medpace employs over 5,000 individuals across 40+ countries.

At Medpace, you will be part of a team that makes a positive impact on the lives of patients and families facing various diseases. The work you do today will contribute to improving the lives of individuals living with illness and disease in the future. Medpace offers a flexible work environment, a competitive compensation and benefits package, structured career paths for professional growth, company-sponsored employee appreciation events, and employee health and wellness initiatives. Medpace has been recognized by Forbes as one of America's Most Successful Midsize Companies and has received CRO Leadership Awards for expertise, quality, capabilities, reliability, and compatibility.

If your qualifications align with the requirements of the position, a Medpace team member will review your profile and, if interested, contact you with further details on the next steps. Join us at Medpace and be a part of a team driven by People, Purpose, and Passion to make a difference tomorrow.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You are an experienced Alteryx developer responsible for designing and developing new applications and enhancing existing models using Alteryx Designer. You will be involved in the entire Software Development Life Cycle (SDLC), which requires excellent communication skills and direct collaboration with the business. It is crucial that you are self-sufficient and adept at building internal networks within the business and technology teams.

Your responsibilities include owning changes from inception to deployment, implementing new functionality, identifying process gaps for improvement, and focusing on scalability and stability. You must be results-oriented, self-motivated, and able to multitask across different teams and applications. Effective communication with remotely dispersed teams is also essential for this role.

Your technical expertise should include workflow enhancement, designing macros, integrating Alteryx with various tools, maintaining user roles in the Alteryx Gallery, using version control systems like Git, and working with multiple data sources compatible with Alteryx. You should possess advanced development and troubleshooting skills, document training and support, understand SDLC methodologies, have strong communication skills, be proficient in SQL database query tools, and comprehend data warehouse architecture.

In addition to the technical requirements, you will need experience working in an Agile environment, managing ETL/ELT data load processes, knowledge of cloud infrastructure, and integration with data sources and relational databases. Being self-motivated, working independently, and collaborating as a team player are essential. Your analytical and problem-solving skills, ability to handle multiple stakeholders and queries, prioritize tasks, and meet prompt deadlines are crucial. A strong client service focus and willingness to respond promptly to queries and deliverables are expected.

Preferred Skills: Data Analytics - Alteryx
Posted 1 week ago
8.0 - 13.0 years
15 - 27 Lacs
Bengaluru
Hybrid
Job Description: We are seeking an experienced and visionary Senior Data Architect to lead the design and implementation of scalable enterprise data solutions. This is a strategic leadership role for someone who thrives in cloud-first, data-driven environments and is passionate about building future-ready data architectures.

Key Responsibilities:
- Define and implement an enterprise-wide data architecture strategy aligned with business goals.
- Design and lead scalable, secure, and resilient data platforms for both structured and unstructured data.
- Architect data lake/warehouse ecosystems and cloud-native solutions (Snowflake, Databricks, Redshift, BigQuery).
- Collaborate with business and tech stakeholders to capture data requirements and translate them into scalable designs.
- Mentor data engineers, analysts, and other architects in data best practices.
- Establish standards for data modeling, integration, and management.
- Drive governance across data quality, security, metadata, and compliance.
- Lead modernization and cloud migration efforts.
- Evaluate new technologies and recommend adoption strategies.
- Support data cataloging, lineage, and MDM initiatives.
- Ensure compliance with privacy standards (e.g., GDPR, HIPAA, CCPA).

Required Qualifications:
- Bachelor's/Master's degree in Computer Science, Data Science, or a related field.
- 10+ years of experience in data architecture; 3+ years in a senior/lead capacity.
- Hands-on experience with modern cloud data platforms: Snowflake, Azure Synapse, AWS Redshift, BigQuery, etc.
- Strong skills in data modeling tools (e.g., Erwin, ER/Studio).
- Deep understanding of ETL/ELT, APIs, and data integration.
- Expertise in SQL, Python, and data-centric languages.
- Experience with data governance, RBAC, encryption, and compliance frameworks.
- DevOps/CI-CD experience in data pipelines is a plus.
- Excellent communication and leadership skills.
Posted 1 week ago