6.0 - 10.0 years
30 - 35 Lacs
Bengaluru
Work from Office
We are seeking an experienced PySpark Developer / Data Engineer to design, develop, and optimize big data processing pipelines using Apache Spark and Python (PySpark). The ideal candidate should have expertise in distributed computing, ETL workflows, data lake architectures, and cloud-based big data solutions.

Key Responsibilities:
- Develop and optimize ETL/ELT data pipelines using PySpark on distributed computing platforms (Hadoop, Databricks, EMR, HDInsight).
- Work with structured and unstructured data to perform data transformation, cleansing, and aggregation.
- Implement data lake and data warehouse solutions on AWS (S3, Glue, Redshift), Azure (ADLS, Synapse), or GCP (BigQuery, Dataflow).
- Optimize PySpark jobs through performance tuning, partitioning, and caching strategies (see the sketch below).
- Design and implement real-time and batch data processing solutions.
- Integrate data pipelines with Kafka, Delta Lake, Iceberg, or Hudi for streaming and incremental updates.
- Ensure data security, governance, and compliance with industry best practices.
- Work with data scientists and analysts to prepare and process large-scale datasets for machine learning models.
- Collaborate with DevOps teams to deploy, monitor, and scale PySpark jobs using CI/CD pipelines, Kubernetes, and containerization.
- Perform unit testing and validation to ensure data integrity and reliability.

Required Skills & Qualifications:
- 6+ years of experience in big data processing, ETL, and data engineering.
- Strong hands-on experience with PySpark (Apache Spark with Python).
- Expertise in SQL, the DataFrame API, and RDD transformations.
- Experience with big data platforms (Hadoop, Hive, HDFS, Spark SQL).
- Knowledge of cloud data processing services (AWS Glue, EMR, Databricks, Azure Synapse, GCP Dataflow).
- Proficiency in writing optimized queries and applying partitioning and indexing for performance tuning.
- Experience with workflow orchestration tools such as Airflow, Oozie, or Prefect.
- Familiarity with containerization and deployment using Docker, Kubernetes, and CI/CD pipelines.
- Strong understanding of data governance, security, and compliance (GDPR, HIPAA, CCPA, etc.).
- Excellent problem-solving, debugging, and performance optimization skills.
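To make the partitioning and caching responsibilities above concrete, here is a minimal PySpark sketch of one ETL step; the bucket paths, column names, and aggregations are illustrative assumptions rather than anything specified in the posting.

```python
# Minimal PySpark ETL sketch; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl_example").getOrCreate()

# Read raw events from a hypothetical data lake location.
events = spark.read.parquet("s3a://example-bucket/raw/events/")

# Cleanse once, cache because the result feeds two aggregations below.
cleaned = (
    events
    .dropDuplicates(["event_id"])
    .filter(F.col("event_ts").isNotNull())
    .cache()
)

daily_counts = cleaned.groupBy("event_date", "event_type").count()
user_totals = cleaned.groupBy("user_id").agg(F.sum("amount").alias("total_amount"))

# Partition output by date so downstream readers can prune files.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-bucket/curated/daily_counts/"
)
user_totals.write.mode("overwrite").parquet("s3a://example-bucket/curated/user_totals/")
```

Repartitioning on the write or join key and unpersisting cached frames once they are no longer needed are the usual next tuning steps.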
Posted 2 months ago
0.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
- Excellent communication and presentation skills.
- Extensive experience with the Azure stack: Azure Databricks, Azure Synapse, ADLS, Azure SQL DB, Azure Data Factory, Cosmos DB, Analysis Services, Event Hub, etc.
- Excellent experience in data processing with Azure Databricks, complex data transformation using PySpark or Python, and building end-to-end data pipelines with Azure Databricks (see the sketch below).
- Experience in job scheduling using Oozie, Airflow, or any other ETL scheduler.
- Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java, Python, or Scala.
- Good experience in designing and delivering data analytics solutions using Azure cloud-native services.
- Good experience in requirements analysis, solution architecture design, data modelling, ETL, data integration, and data migration design.
- Documentation of solutions (e.g. data models, configurations, and setup).
- Well versed in Waterfall, Agile, Scrum, and similar project delivery methodologies.
- Experienced in internal as well as external stakeholder management.
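As a rough illustration of the end-to-end pipeline work described above, a minimal Azure Databricks sketch follows; the storage account, container names, and columns are assumptions, and `spark` is the session a Databricks notebook provides.

```python
# Minimal ADLS-to-Delta transformation sketch for a Databricks notebook.
# Storage account, containers, and columns are hypothetical; `spark` comes from Databricks.
from pyspark.sql import functions as F

source_path = "abfss://raw@examplestorage.dfs.core.windows.net/sales/"
target_path = "abfss://curated@examplestorage.dfs.core.windows.net/sales_clean/"

sales = spark.read.format("json").load(source_path)

sales_clean = (
    sales
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("net_amount", F.col("gross_amount") - F.coalesce(F.col("discount"), F.lit(0)))
    .filter(F.col("order_id").isNotNull())
)

# Delta is the usual target format on Databricks; partitioning by date helps downstream pruning.
sales_clean.write.format("delta").mode("overwrite").partitionBy("order_date").save(target_path)
```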
Posted 2 months ago
5.0 - 8.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Azure backend expert (ADLS, ADF, and Azure SQL DW), 4+ years / immediate joiners only.
One Azure backend expert (Strong SC or Specialist Senior):
- Should have hands-on experience working with ADLS, ADF, and Azure SQL DW.
- Should have a minimum of 3 years' working experience delivering Azure projects.

Must Have:
- 3 to 8 years of experience designing, developing, and deploying ETL processes on Databricks to support data integration and transformation.
- Optimize and tune Databricks jobs for performance and scalability.
- Experience with Scala and/or Python programming languages.
- Proficiency in SQL for querying and managing data.
- Expertise in ETL (Extract, Transform, Load) processes.
- Knowledge of data modeling and data warehousing concepts.
- Implement best practices for data pipelines, including monitoring, logging, and error handling (see the incremental-load sketch below).
- Excellent problem-solving skills and attention to detail.
- Excellent written and verbal communication skills.
- Strong analytical and problem-solving abilities.
- Experience with version control systems (e.g., Git) to manage and track changes to the codebase.
- Document technical designs, processes, and procedures related to Databricks development.
- Stay current with Databricks platform updates and recommend improvements to existing processes.

Good to Have:
- Agile delivery experience.
- Experience with cloud services, particularly Azure (Azure Databricks), AWS (AWS Glue, EMR), or Google Cloud Platform (GCP).
- Knowledge of Agile and Scrum software development methodologies.
- Understanding of data lake architectures.
- Familiarity with tools like Apache NiFi, Talend, or Informatica.
- Skills in designing and implementing data models.

Skills: ADF, SQL, ADLS, Azure, Azure SQL DW
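One way to read the monitoring, logging, and error-handling bullet is an incremental upsert wrapped in basic logging; a hedged Databricks sketch follows, with the table names, paths, and merge key invented for illustration.

```python
# Incremental upsert sketch using Delta Lake MERGE on Databricks.
# Table names, paths, and the merge key are hypothetical; `spark` comes from Databricks.
import logging
from delta.tables import DeltaTable

log = logging.getLogger("databricks_etl")

updates = spark.read.format("delta").load("/mnt/staging/customer_updates")

try:
    target = DeltaTable.forName(spark, "curated.customers")
    (
        target.alias("t")
        .merge(updates.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )
    log.info("Merged %d staged rows into curated.customers", updates.count())
except Exception:
    log.exception("Customer merge failed; target table left untouched")
    raise
```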
Posted 3 months ago
4.0 - 9.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Job Summary
We are seeking a skilled Azure Data Engineer with 4 years of overall experience, including at least 2 years of hands-on experience with Azure Databricks (must). The ideal candidate will have strong expertise in building and maintaining scalable data pipelines and working across cloud-based data platforms.

Key Responsibilities
- Design, develop, and optimize large-scale data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse (a parameter-passing sketch follows below).
- Implement data lake solutions and work with structured and unstructured datasets in Azure Data Lake Storage (ADLS).
- Collaborate with data scientists, analysts, and engineering teams to design and deliver end-to-end data solutions.
- Develop ETL/ELT processes and integrate data from multiple sources.
- Monitor, debug, and optimize workflows for performance and cost-efficiency.
- Ensure data governance, quality, and security best practices are maintained.

Must-Have Skills
- 4+ years of total experience in data engineering.
- 2+ years of experience with Azure Databricks (PySpark, notebooks, Delta Lake).
- Strong experience with Azure Data Factory, Azure SQL, and ADLS.
- Proficient in writing SQL queries and Python/Scala scripting.
- Understanding of CI/CD pipelines and version control systems (e.g., Git).
- Solid grasp of data modeling and warehousing concepts.

Skills: Azure Synapse, data modeling, data engineering, Azure, Azure Databricks, Azure Data Lake Storage (ADLS), CI/CD, ETL, ELT, data warehousing, SQL, Scala, Git, Azure Data Factory, Python
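A small sketch of how an ADF pipeline and a Databricks notebook typically hand off parameters, since the posting pairs the two; the widget name, paths, and target table are assumptions.

```python
# Databricks notebook sketch: read a parameter passed from an ADF pipeline run.
# Widget name, paths, and table are hypothetical; `spark` and `dbutils` come from Databricks.
from pyspark.sql import functions as F

dbutils.widgets.text("run_date", "")           # ADF supplies this via notebook base parameters
run_date = dbutils.widgets.get("run_date")

orders = (
    spark.read.format("delta")
    .load("abfss://raw@examplestorage.dfs.core.windows.net/orders/")
    .filter(F.col("ingest_date") == run_date)  # process only the requested daily slice
)

orders.write.format("delta").mode("append").saveAsTable("curated.orders_daily")
```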
Posted 3 months ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
Responsibilities
- Create and manage scalable data pipelines to collect, process, and store large volumes of data from various sources.
- Integrate data from multiple sources, ensuring consistency, quality, and reliability.
- Design, implement, and optimize database schemas and structures to support data storage and retrieval.
- Develop and maintain ETL (Extract, Transform, Load) processes to accurately and efficiently move data between systems.
- Build and maintain data warehouses to support business intelligence and analytics needs.
- Optimize data processing and storage performance for efficient resource utilization and quick retrieval.
- Create and maintain comprehensive documentation for data pipelines, ETL processes, and database schemas.
- Monitor data pipelines and systems for performance and reliability, troubleshooting and resolving issues as they arise.
- Stay up to date with emerging technologies and best practices in data engineering, evaluating and recommending new tools as appropriate.

Requirements
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field (Engineering or Math preferred).
- 5+ years of experience with SQL, Python, .NET, SSIS, and SSAS.
- 2+ years of experience with Azure cloud services, particularly SQL Server, ADF, Azure Databricks, ADLS, Key Vault, Azure Functions, and Logic Apps, with an emphasis on Databricks.
- 2+ years of experience using Git and deploying code using a CI/CD approach.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Ability to work independently and as part of a team.
- Attention to detail and a commitment to quality.
Posted 3 months ago
4.0 - 8.0 years
6 - 10 Lacs
Hyderabad
Work from Office
One Azure backend expert (Strong SC or Specialist Senior):
- Should have hands-on experience working with ADLS, ADF, and Azure SQL DW.
- Should have a minimum of 3 years' working experience delivering Azure projects.

Must Have:
- 3 to 8 years of experience designing, developing, and deploying ETL processes on Databricks to support data integration and transformation.
- Optimize and tune Databricks jobs for performance and scalability.
- Experience with Scala and/or Python programming languages.
- Proficiency in SQL for querying and managing data.
- Expertise in ETL (Extract, Transform, Load) processes.
- Knowledge of data modeling and data warehousing concepts.
- Implement best practices for data pipelines, including monitoring, logging, and error handling.
- Excellent problem-solving skills and attention to detail.
- Excellent written and verbal communication skills.
- Strong analytical and problem-solving abilities.
- Experience with version control systems (e.g., Git) to manage and track changes to the codebase.
- Document technical designs, processes, and procedures related to Databricks development.
- Stay current with Databricks platform updates and recommend improvements to existing processes.

Good to Have:
- Agile delivery experience.
- Experience with cloud services, particularly Azure (Azure Databricks), AWS (AWS Glue, EMR), or Google Cloud Platform (GCP).
- Knowledge of Agile and Scrum software development methodologies.
- Understanding of data lake architectures.
- Familiarity with tools like Apache NiFi, Talend, or Informatica.
- Skills in designing and implementing data models.
Posted 3 months ago
5.0 - 8.0 years
15 - 27 Lacs
Bengaluru
Hybrid
We are looking for a highly skilled API & Pixel Tracking Integration Engineer to lead the development and deployment of server-side tracking and attribution solutions across multiple platforms. The ideal candidate brings deep expertise in CAPI integrations (Meta, Google, and other platforms), secure data handling using cryptographic techniques, and experience working within privacy-first environments like Azure Clean Rooms. This role requires strong hands-on experience in C# development, Azure cloud services, OCI (Oracle Cloud Infrastructure), and marketing technology stacks including Adobe Tag Management and Pixel Management. You will work closely with engineering, analytics, and marketing teams to deliver scalable, compliant, and secure data tracking solutions that drive business insights and performance.

Key Responsibilities:
- Design, implement, and maintain CAPI integrations across Meta, Google, and all major platforms, ensuring real-time and accurate server-side event tracking.
- Develop and manage custom tracking solutions leveraging Azure Clean Rooms, ensuring user NFAs are respected and privacy-compliant logic is implemented.
- Architect and develop secure REST APIs in C# to support advanced attribution models and marketing analytics pipelines.
- Implement cryptographic hashing (e.g., SHA-256) for secure handling of user identifiers such as email addresses (see the sketch below).
- Use Azure Data Lake Gen1 & Gen2 (ADLS), Cosmos DB, and Azure Functions to build and host scalable backend systems.
- Integrate with Azure Key Vault to securely manage secrets and sensitive credentials.
- Design and execute data pipelines in Azure Data Factory (ADF) for processing and transforming tracking data.
- Lead pixel and tag management initiatives using Adobe Tag Manager, including pixel governance and QA across properties.
- Collaborate with security teams to ensure all data-sharing and processing complies with Azure's data security standards and enterprise privacy frameworks.
- Utilize Fabric and OCI environments as needed for data integration and marketing intelligence workflows.
- Monitor, troubleshoot, and optimize existing integrations using logs, diagnostics, and analytics tools.

Required Skills:
- Strong hands-on experience with C# and building scalable APIs.
- Experience implementing Meta CAPI, Google Enhanced Conversions, and other platform-specific server-side tracking APIs.
- Knowledge of Azure Clean Rooms, with experience developing custom logic and code for clean data collaborations.
- Proficiency with Azure cloud technologies, especially Cosmos DB, Azure Functions, ADF, Key Vault, ADLS, and Azure security best practices.
- Familiarity with OCI for hybrid-cloud integration scenarios.
- Understanding of cryptography and secure data handling (e.g., hashing email addresses with SHA-256).
- Experience with Adobe Tag Management, specifically in pixel governance and lifecycle.
- Proven ability to collaborate across functions, especially with marketing and analytics teams.

Soft Skills:
- Strong communication skills to explain technical concepts to non-technical stakeholders.
- Proven ability to collaborate across teams, especially with marketing, product, and data analytics.
- Adaptable and proactive in learning and applying evolving technologies and regulatory changes.
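To illustrate the SHA-256 hashing called out above, here is a hedged sketch of normalizing and hashing an email before sending a server-side event. It is written in Python for brevity even though the posting asks for C#; the payload shape follows Meta's public Conversions API documentation, and the pixel ID, access token, and event values are placeholders.

```python
# Sketch: hash a user identifier and post a server-side conversion event.
# Pixel ID, access token, and event values are placeholders, not real credentials.
import hashlib
import json
import time
import urllib.request

def hash_identifier(value: str) -> str:
    # Server-side tracking APIs generally expect trimmed, lowercased values hashed with SHA-256.
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

payload = {
    "data": [{
        "event_name": "Purchase",
        "event_time": int(time.time()),
        "action_source": "website",
        "user_data": {"em": [hash_identifier(" User@Example.com ")]},
        "custom_data": {"currency": "USD", "value": 42.00},
    }]
}

req = urllib.request.Request(
    "https://graph.facebook.com/v19.0/<PIXEL_ID>/events?access_token=<ACCESS_TOKEN>",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment only with a real pixel ID and token
```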
Posted 3 months ago
5.0 - 8.0 years
15 - 27 Lacs
Bengaluru
Hybrid
We are looking for a highly skilled API & Pixel Tracking Integration Engineer to lead the development and deployment of server-side tracking and attribution solutions across multiple platforms. The ideal candidate brings deep expertise in CAPI integrations (Meta, Google, and other platforms), secure data handling using cryptographic techniques, and experience working within privacy-first environments like Azure Clean Rooms. This role requires strong hands-on experience in C# development, Azure cloud services, OCI (Oracle Cloud Infrastructure), and marketing technology stacks including Adobe Tag Management and Pixel Management. You will work closely with engineering, analytics, and marketing teams to deliver scalable, compliant, and secure data tracking solutions that drive business insights and performance.

Key Responsibilities:
- Design, implement, and maintain CAPI integrations across Meta, Google, and all major platforms, ensuring real-time and accurate server-side event tracking.
- Utilize Fabric and OCI environments as needed for data integration and marketing intelligence workflows.
- Develop and manage custom tracking solutions leveraging Azure Clean Rooms, ensuring user NFAs are respected and privacy-compliant logic is implemented.
- Implement cryptographic hashing (e.g., SHA-256) for secure handling of user identifiers such as email addresses.
- Use Azure Data Lake Gen1 & Gen2 (ADLS), Cosmos DB, and Azure Functions to build and host scalable backend systems.
- Integrate with Azure Key Vault to securely manage secrets and sensitive credentials.
- Design and execute data pipelines in Azure Data Factory (ADF) for processing and transforming tracking data.
- Lead pixel and tag management initiatives using Adobe Tag Manager, including pixel governance and QA across properties.
- Collaborate with security teams to ensure all data-sharing and processing complies with Azure's data security standards and enterprise privacy frameworks.
- Monitor, troubleshoot, and optimize existing integrations using logs, diagnostics, and analytics tools.

Required Skills:
- Strong hands-on experience with Fabric and building scalable APIs.
- Experience implementing Meta CAPI, Google Enhanced Conversions, and other platform-specific server-side tracking APIs.
- Knowledge of Azure Clean Rooms, with experience developing custom logic and code for clean data collaborations.
- Proficiency with Azure cloud technologies, especially Cosmos DB, Azure Functions, ADF, Key Vault, ADLS, and Azure security best practices.
- Familiarity with OCI for hybrid-cloud integration scenarios.
- Understanding of cryptography and secure data handling (e.g., hashing email addresses with SHA-256).
- Experience with Adobe Tag Management, specifically in pixel governance and lifecycle.
- Proven ability to collaborate across functions, especially with marketing and analytics teams.

Soft Skills:
- Strong communication skills to explain technical concepts to non-technical stakeholders.
- Proven ability to collaborate across teams, especially with marketing, product, and data analytics.
- Adaptable and proactive in learning and applying evolving technologies and regulatory changes.
Posted 3 months ago
5.0 - 8.0 years
15 - 27 Lacs
Bengaluru
Hybrid
We are looking for a highly skilled API & Pixel Tracking Integration Engineer to lead the development and deployment of server-side tracking and attribution solutions across multiple platforms. The ideal candidate brings deep expertise in CAPI integrations (Meta, Google, and other platforms), secure data handling using cryptographic techniques, and experience working within privacy-first environments like Azure Clean Rooms. This role requires strong hands-on experience in Azure cloud services, OCI (Oracle Cloud Infrastructure), and marketing technology stacks including Adobe Tag Management and Pixel Management. You will work closely with engineering, analytics, and marketing teams to deliver scalable, compliant, and secure data tracking solutions that drive business insights and performance.

Key Responsibilities:
- Design, implement, and maintain CAPI integrations across Meta, Google, and all major platforms, ensuring real-time and accurate server-side event tracking.
- Utilize OCI environments as needed for data integration and marketing intelligence workflows.
- Develop and manage custom tracking solutions leveraging Azure Clean Rooms, ensuring user NFAs are respected and privacy-compliant logic is implemented.
- Implement cryptographic hashing (e.g., SHA-256) for secure handling of user identifiers such as email addresses.
- Use Azure Data Lake Gen1 & Gen2 (ADLS), Cosmos DB, and Azure Functions to build and host scalable backend systems.
- Integrate with Azure Key Vault to securely manage secrets and sensitive credentials.
- Design and execute data pipelines in Azure Data Factory (ADF) for processing and transforming tracking data.
- Lead pixel and tag management initiatives using Adobe Tag Manager, including pixel governance and QA across properties.
- Collaborate with security teams to ensure all data-sharing and processing complies with Azure's data security standards and enterprise privacy frameworks.
- Monitor, troubleshoot, and optimize existing integrations using logs, diagnostics, and analytics tools.

Required Skills:
- Strong hands-on experience in Python and building scalable APIs.
- Experience implementing Meta CAPI, Google Enhanced Conversions, and other platform-specific server-side tracking APIs.
- Proficiency with Azure cloud technologies: Azure Functions, ADF, Key Vault, ADLS, and Azure security best practices.
- Knowledge of Azure Clean Rooms, with experience developing custom logic and code for clean data collaborations.
- Familiarity with OCI for hybrid-cloud integration scenarios.
- Understanding of cryptography and secure data handling (e.g., hashing email addresses with SHA-256).
- Experience with Adobe Tag Management, specifically in pixel governance and lifecycle.
- Proven ability to collaborate across functions, especially with marketing and analytics teams.

Soft Skills:
- Strong communication skills to explain technical concepts to non-technical stakeholders.
- Proven ability to collaborate across teams, especially with marketing, product, and data analytics.
- Adaptable and proactive in learning and applying evolving technologies and regulatory changes.
Posted 3 months ago
0.0 years
0 Lacs
Hyderabad / Secunderabad, Telangana, Telangana, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant - Databricks Developer! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.

Responsibilities
- Maintains close awareness of new and emerging technologies and their potential application for service offerings and products.
- Works with architects and lead engineers on solutions to meet functional and non-functional requirements.
- Demonstrated knowledge of relevant industry trends and standards.
- Demonstrates strong analytical and technical problem-solving skills.
- Must have experience in the Data Engineering domain.

Qualifications we seek in you!
Minimum qualifications
- Bachelor's Degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience.
- Maintains close awareness of new and emerging technologies and their potential application for service offerings and products.
- Works with architects and lead engineers on solutions to meet functional and non-functional requirements.
- Demonstrated knowledge of relevant industry trends and standards.
- Demonstrates strong analytical and technical problem-solving skills.
- Must have excellent coding skills in either Python or Scala, preferably Python.
- Must have experience in the Data Engineering domain.
- Must have implemented at least 2 projects end-to-end in Databricks.
- Must have experience with Databricks components including Delta Lake, Databricks Connect (dbConnect), the Databricks API 2.0, and Databricks Workflows orchestration.
- Must be well versed with the Databricks Lakehouse concept and its implementation in enterprise environments.
- Must have a good understanding of how to create complex data pipelines.
- Must have good knowledge of data structures and algorithms.
- Must be strong in SQL and Spark SQL.
- Must have strong performance optimization skills to improve efficiency and reduce cost.
- Must have worked on both batch and streaming data pipelines (see the streaming sketch below).
- Must have extensive knowledge of the Spark and Hive data processing frameworks.
- Must have worked on any cloud (Azure, AWS, GCP) and the most common services, such as ADLS/S3, ADF/Lambda, Cosmos DB/DynamoDB, ASB/SQS, and cloud databases.
- Must be strong in writing unit and integration test cases.
- Must have strong communication skills and have worked in teams of size 5 plus.
- Must have a great attitude towards learning new skills and upskilling existing skills.

Preferred Qualifications
- Good to have Unity Catalog and basic governance knowledge.
- Good to have Databricks SQL endpoint understanding.
- Good to have CI/CD experience to build pipelines for Databricks jobs.
- Good to have worked on a migration project to build a unified data platform.
- Good to have knowledge of dbt.
- Good to have knowledge of Docker and Kubernetes.
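For the batch-and-streaming requirement above, a minimal Structured Streaming sketch on Databricks is shown below; the schema, landing path, and checkpoint location are assumptions, not part of the posting.

```python
# Streaming ingest sketch: JSON landing zone -> partitioned Delta table.
# Schema, paths, and checkpoint location are hypothetical; `spark` is provided by Databricks.
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
    StructField("event_ts", TimestampType()),
])

stream = (
    spark.readStream.schema(schema)
    .json("/mnt/landing/telemetry/")
    .withColumn("event_date", F.to_date("event_ts"))
)

(
    stream.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/mnt/checkpoints/telemetry/")
    .partitionBy("event_date")
    .start("/mnt/curated/telemetry/")
)
```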
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit our website and follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 3 months ago
5.0 - 8.0 years
3 - 7 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Inviting applications for Azure Data Engineer
Experience - 5 to 8 years
Joining Location - Chennai
Required Technical Skill Set - ADB, ADF
- 3+ years of relevant experience in PySpark and Azure Databricks.
- Proficiency in integrating, transforming, and consolidating data from various structured and unstructured data sources.
- Good experience in SQL or native SQL query languages (see the Spark SQL sketch below).
- Strong experience in implementing Databricks notebooks using Python.
- Good experience in Azure Data Factory, ADLS, Storage Services, serverless architecture, and Azure Functions.
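A short sketch of mixing the DataFrame API with native SQL in a Databricks notebook, as the SQL bullet above suggests; the paths, table names, and columns are assumptions.

```python
# Sketch: register a DataFrame as a temp view and query it with Spark SQL.
# Paths, table names, and columns are hypothetical; `spark` is provided by Databricks.
orders = spark.read.format("delta").load("/mnt/raw/orders/")
orders.createOrReplaceTempView("orders")

daily_revenue = spark.sql("""
    SELECT order_date, SUM(amount) AS revenue
    FROM orders
    WHERE status = 'COMPLETED'
    GROUP BY order_date
""")

daily_revenue.write.format("delta").mode("overwrite").saveAsTable("curated.daily_revenue")
```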
Posted 3 months ago
5.0 - 10.0 years
15 - 16 Lacs
Bangalore Rural, Bengaluru
Work from Office
Experience in designing, building, and managing data solutions on Azure. Design, develop, and optimize big data pipelines and architectures on Azure. Implement ETL/ELT processes using Azure Data Factory, Databricks, and Spark.

Required Candidate Profile
- 5 years of experience in data engineering and big data technologies.
- Hands-on experience with Azure services (Azure Data Factory, Azure Synapse, Azure SQL, ADLS, etc.).
- Databricks certification (mandatory).
Posted 3 months ago
6.0 - 8.0 years
7 - 11 Lacs
Gurugram
Work from Office
DISCOVER your opportunity
What will your essential responsibilities include?
- Possess excellent domain knowledge of data warehousing technologies, SQL, and data models to develop test strategies and approaches from a Quality Engineering perspective.
- In close coordination with project teams, help lead all efforts from a Quality Engineering perspective.
- Work with data engineers or data scientists to collect and prepare the necessary test data sets. Ensure the data adequately represents real-world scenarios and covers a diverse range of inputs.
- Excellent domain knowledge of data warehousing technologies, SQL, and data models to build out test strategies and lead projects from a Quality Engineering perspective.
- With an automation-first mindset, work towards testing of user interfaces such as Business Intelligence solutions and validation of functionalities, while constantly looking out for efficiency gains and process improvements (see the reconciliation-check sketch below).
- Triage and prioritization of stories and epics with all stakeholders to ensure optimal deliveries.
- Engage with various stakeholders such as business partners, product owners, and development and infrastructure teams to ensure alignment with the overall roadmap.
- Track current progress of testing activities, find and track test metrics, and estimate and communicate improvement actions based on the test metric results and experience.
- Automation for processes such as data loads, user interfaces such as Business Intelligence solutions, and other validations of business KPIs.
- Adopt and implement best practices for documentation of test plans, cases, and results in JIRA.
- Triage and prioritization of defects with all stakeholders.
- Leadership accountability for ensuring that every release to customers is fit for purpose and performant.
- Knowledge of Scaled Agile, Scrum, or Kanban methodology.
You will report to the Lead UAT.

SHARE your talent
We're looking for someone who has these abilities and skills:

Required Skills and Abilities:
- A minimum of a bachelor's or master's degree (preferred) in a relevant discipline.
- Relevant years of excellent testing background, including knowledge of and experience in automation.
- Insurance experience in data, underwriting, claims, or operations, including influencing, collaborating, and leading efforts in complex, disparate, and interrelated teams.
- Excellent experience with SQL Server, Azure Databricks notebooks, Power BI, ADLS, Cosmos DB, and SQL DW Analytics.
- A robust background in software development, with experience in ingesting, transforming, and storing data from large datasets using PySpark in Azure Databricks and robust knowledge of distributed computing concepts.
- Hands-on experience in designing and developing ETL pipelines in PySpark in Azure Databricks with robust Python scripting.

Desired Skills and Abilities:
- Experience doing UAT/system integration testing in the insurance industry.
- Excellent technical testing experience such as API testing; UI automation is a plus.
- Knowledge of and experience with testing cloud-based systems across different data staging layers.
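As an example of the automated data-load validation this role describes, here is a hedged PySpark reconciliation check; the table names, columns, and thresholds are invented for illustration.

```python
# Sketch of a post-load reconciliation check a quality engineer might automate.
# Table names and columns are hypothetical; `spark` is the Databricks session.
from pyspark.sql import functions as F

source_count = spark.table("staging.policies").count()
target_count = spark.table("curated.policies").count()

assert source_count == target_count, (
    f"Row-count mismatch after load: staging={source_count}, curated={target_count}"
)

# Spot-check a business KPI in addition to raw counts.
null_premiums = (
    spark.table("curated.policies")
    .filter(F.col("annual_premium").isNull())
    .count()
)
assert null_premiums == 0, f"{null_premiums} policies loaded without a premium"
```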
Posted 3 months ago
3.0 - 7.0 years
9 - 16 Lacs
Remote, , India
On-site
Job Role: CDP Data Engineer

Why MResult
Founded in 2004, MResult is a global digital solutions partner trusted by leading Fortune 500 companies in industries such as pharma & healthcare, retail, and BFSI. MResult's expertise in data and analytics, data engineering, machine learning, AI, and automation helps companies streamline operations and unlock business value. As part of our team, you will collaborate with top minds in the industry to deliver cutting-edge solutions that solve real-world challenges.

What We Offer: At MResult, you can leave your mark on projects at the world's most recognized brands, access opportunities to grow and upskill, and do your best work with the flexibility of hybrid work models. Great work is rewarded, and leaders are nurtured from within. Our values - Agility, Collaboration, Client Focus, Innovation, and Integrity - are woven into our culture, guiding every decision.

Website: https://mresult.com/
LinkedIn: https://www.linkedin.com/company/mresult/

What This Role Requires
In the role of CDP Data Engineer, you will be a key contributor to MResult's mission of empowering our clients with data-driven insights and innovative digital solutions. Each day brings exciting challenges and growth opportunities. Here is what you will do:
- Design, develop, and implement solutions using a Customer Data Platform (CDP) to manage and analyze customer data.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Integrate the CDP with various data sources and ensure seamless data flow and accuracy.
- Develop and maintain data pipelines, ensuring data is collected, processed, and stored efficiently.
- Create and manage customer profiles, segments, and audiences within the CDP.
- Implement data governance and security best practices to protect customer data.
- Monitor and optimize the performance of the CDP infrastructure.
- Provide technical support and troubleshooting for CDP-related issues.
- Stay updated with the latest trends and advancements in CDP technology and best practices.

Key Skills to Succeed in This Role:
- Overall experience of 3-6 years.
- Experience in Customer Insights Data.
- Experience in the customer insights journey.
- Experience in ADLS, ADF, and Synapse is a must.
- Experience in Dataverse, Power Platform, and Snowflake.

Manage, Master, and Maximize with MResult
MResult is an equal-opportunity employer committed to building an inclusive environment free of discrimination and harassment. Take the next step in your career with MResult, where your ideas help shape the future.
Posted 3 months ago
5 - 9 years
22 - 32 Lacs
Noida, Kolkata, Hyderabad
Hybrid
Good experience in: Hadoop, SQL, Azure (ADF, ADB, ADLS, Log Analytics, Logic Apps, Key Vault, Blob Storage).
79-year-old reputed MNC company.
Posted 4 months ago
12 - 15 years
15 - 17 Lacs
Bengaluru
Work from Office
About The Role

Overview
Technology for today and tomorrow: The Boeing India Engineering & Technology Center (BIETC) is a 5500+ engineering workforce that contributes to global aerospace growth. Our engineers deliver cutting-edge R&D, innovation, and high-quality engineering work in global markets, and leverage new-age technologies such as AI/ML, IIoT, Cloud, Model-Based Engineering, and Additive Manufacturing, shaping the future of aerospace.

People-driven culture: At Boeing, we believe creativity and innovation thrive when every employee is trusted, empowered, and has the flexibility to choose, grow, learn, and explore. We offer variable arrangements depending upon business and customer needs, and professional pursuits that offer greater flexibility in the way our people work. We also believe that collaboration, frequent team engagements, and face-to-face meetings bring together different perspectives and thoughts, enabling every voice to be heard and every perspective to be respected. No matter where or how our teammates work, we are committed to positively shaping people's careers and being thoughtful about employee wellbeing.

The Boeing India Software Engineering team is currently looking for one Lead Software Engineer - Developer to join their team in Bengaluru, KA. As an ETL Developer, you will be part of the Application Solutions team, which develops software applications and digital products that create direct value for its customers. We provide revamped work environments focused on delivering data-driven solutions at a rapidly increased pace over traditional development. Be a part of our passionate and motivated team who are excited to use the latest in software technologies for modern web and mobile application development. Through our products we deliver innovative solutions to our global customer base at an accelerated pace.

Position Responsibilities:
- Perform data mining and collection procedures.
- Ensure data quality and integrity; interpret and analyze data problems.
- Visualize data and create reports.
- Experiment with new models and techniques.
- Determine how data can be used to achieve customer/user goals.
- Design data modeling processes; create algorithms and predictive models for analysis.
- Enable development of prediction engines, pattern detection analysis, optimization algorithms, etc.
- Develop guidance for analytics-based wireframes.
- Organize and conduct data assessments; discover insights from structured and unstructured data.
- Estimate user stories/features (story point estimation) and tasks in hours with the required level of accuracy, and commit them as part of sprint planning.
- Contribute to backlog grooming meetings by promptly asking relevant questions to ensure requirements achieve the right level of DOR.
- Raise any impediments/risks (technical/operational/personal) and approach the Scrum Master/Technical Architect/PO accordingly to arrive at a solution.
- Update the status and the remaining effort for assigned tasks on a daily basis.
- Ensure change requests are treated correctly and tracked in the system, impact analysis is done, and risks/timelines are appropriately communicated.
- Hands-on experience in understanding aerospace domain-specific data.
- Coordinate with data scientists in data preparation, exploration, and making data ready.
- Clear understanding of defining and monetizing data products.
- Experience in building self-service capabilities for users.
- Build quality checks across the data lineage and be responsible for designing and implementing different data patterns.
- Influence stakeholders for funding and build the vision of the product in terms of usage, productivity, and scalability of the solutions.
- Build impactful, outcome-based solutions/products.

Basic Qualifications (Required Skills/Experience):
- Bachelor's or Master's degree.
- 12-15 years of experience as a data engineer.
- Expertise in SQL and Python; knowledge of Java, Oracle, R, data modeling, and Power BI.
- Experience in understanding and interacting with multiple data formats.
- Ability to rapidly learn and understand software from source code.
- Expertise in understanding, analyzing, and optimizing large, complicated SQL statements.
- Strong knowledge and experience in SQL Server, database design, and ETL queries.
- Develop software models to simulate real-world problems to help operational leaders understand which variables to focus on.
- Proficiency in streamlining and optimizing databases for efficient and consistent data consumption.
- Strong understanding of data warehouse, data lake, and data mesh concepts.
- Familiarity with ETL tools and data ingestion patterns.
- Hands-on experience in building data pipelines using GCP.
- Hands-on experience in writing complex SQL (NoSQL is a big plus).
- Hands-on experience with data pipeline orchestration tools such as Airflow/GCP Composer (see the DAG sketch below).
- Hands-on experience in data modelling.
- Experience in leading diverse teams.
- Experience in performance tuning of large data warehouses/data lakes.
- Exposure to prompt engineering, LLMs, and vector databases.
- Python, SQL, and PySpark; Spark ecosystem (Spark Core, Spark Streaming, Spark SQL) / Databricks.
- Azure (ADF, ADB, Logic Apps, Azure SQL Database, Azure Key Vault, ADLS, Synapse).

Preferred Qualifications (Desired Skills/Experience):
- Pub/Sub, Terraform.
- Deep learning - TensorFlow.
- Time series; BI/visualization tools - Power BI and Tableau; languages - R/Python.
- Machine learning, NLP.

Typical Education & Experience:
Education/experience typically acquired through advanced education (e.g. Bachelor's) and typically 12 to 15 years' related work experience, or an equivalent combination of education and experience (e.g. Master's + 11 years of related work experience).

Relocation: This position does offer relocation within India.
Export Control Requirements: This is not an Export Control position.
Education: Bachelor's Degree or Equivalent Required.
Relocation: This position offers relocation based on candidate eligibility.
Visa Sponsorship: Employer will not sponsor applicants for employment visa status.
Shift: Not a Shift Worker (India).
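Since the qualifications above call out Airflow/GCP Composer, a minimal DAG sketch follows; the DAG id, schedule, and task bodies are placeholders, not anything from the posting.

```python
# Minimal Airflow DAG sketch of a daily extract-transform-load sequence.
# DAG id, schedule, and task bodies are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source systems")

def transform():
    print("run Spark / SQL transformations")

def load():
    print("publish curated tables")

with DAG(
    dag_id="daily_warehouse_refresh",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",  # run at 02:00 every day
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```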
Posted 4 months ago
2.0 - 3.0 years
3 - 4 Lacs
hyderabad
Remote
A highly skilled Senior Data Engineer is sought to join the team. The candidate should have expertise in Azure Databricks, PySpark, SQL, and other Azure data services.
Posted Date not available
3.0 - 7.0 years
10 - 18 Lacs
mumbai
Hybrid
Role & responsibilities
- Designing and Building Data Pipelines: Creating robust, scalable, and efficient ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) pipelines to move data from various sources into data warehouses, data lakes, or other storage systems. Ingest both structured and unstructured data (see the incremental-extract sketch below).
- Data Storage and Management: Selecting and managing appropriate data storage solutions (e.g., relational databases, S3, ADLS, data warehouses such as SQL and Databricks).
- Data Architecture: Understand target data models, schemas, and database structures that support business requirements and data analysis needs.
- Data Integration: Connecting disparate data sources, ensuring data consistency and quality across different systems.
- Performance Optimization: Optimizing data processing systems for speed, efficiency, and scalability, often dealing with large source-system datasets.
- Data Governance and Security: Implementing measures for data quality, security, privacy, and compliance with regulations.
- Collaboration: Working closely with data scientists, data analysts, business intelligence developers, and other stakeholders to understand their data needs and provide them with clean, reliable data.
- Automation: Automating data processes and workflows to reduce manual effort and improve reliability.

Preferred candidate profile
- 4 - 6 years of experience as a Data Engineer.
- ETL/ELT Tools: Experience with data integration tools and platforms like SSIS and Azure Data Factory.
- SSIS Package Development:
  - Control Flow: Designing and managing the workflow of ETL processes, including tasks, containers, and precedence constraints.
  - Data Flow: Building pipelines for extracting data from sources and transforming it using various built-in components.
- SQL Server Management Studio (SSMS): For database administration, querying, and managing SSIS packages.
- SQL Server Data Tools (SSDT) / Visual Studio: The primary IDE for developing SSIS packages.
- Scripting (C# or VB.NET): For advanced transformations, custom components, or complex logic that cannot be achieved with built-in SSIS components.
- Programming Languages: An advantage if experienced in Python, Java, or Scala basics.
- Cloud Platforms: Proficiency with cloud data services from providers such as Microsoft Azure (Azure Data Lake, Azure Data Factory).
- Data Warehousing: Understanding of data warehousing concepts, dimensional modelling, and schema design.
- Version Control: Familiarity with Git and collaborative development workflows.
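The ingestion bullet above is commonly implemented as a watermark-driven incremental extract; a hedged Python sketch of that pattern follows, with the connection string, tables, and columns as placeholders and the load step stubbed out.

```python
# Watermark-driven incremental extract sketch (the pattern ADF/SSIS pipelines implement).
# Connection string, table names, and columns are placeholders; the load step is stubbed.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=<server>;DATABASE=<db>;UID=<user>;PWD=<password>"
)
cur = conn.cursor()

# 1. Read the last successfully loaded watermark.
cur.execute("SELECT last_loaded_at FROM etl.watermarks WHERE table_name = ?", "sales_orders")
last_loaded_at = cur.fetchone()[0]

# 2. Pull only rows modified since that watermark.
cur.execute(
    "SELECT order_id, amount, modified_at FROM dbo.sales_orders WHERE modified_at > ?",
    last_loaded_at,
)
rows = cur.fetchall()  # hand these to the transform/load step (stubbed here)

# 3. Advance the watermark only after the load succeeds.
cur.execute(
    "UPDATE etl.watermarks SET last_loaded_at = SYSUTCDATETIME() WHERE table_name = ?",
    "sales_orders",
)
conn.commit()
```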
Posted Date not available