1693 Data Engineering Jobs - Page 47

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

1.0 - 5.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Coeo are trusted data management and analytics experts, delivering technology strategy and support for business. The team have deep technical and commercial experience working with Microsoft Data Services, helping clients optimise their costs and maximise the benefits from their investments in these technologies. Coeo have a strong emphasis on consulting skills, and we expect our team members to be customer-facing and have a growth mindset. This role sits within our consulting team, and we have clear expectations that our team members understand the importance of personal utilisation and have the ability to spot opportunities within our clients that can be passed back to our business development team.

Coeo has been established for over 14 years and has focused exclusively on Microsoft technologies. Our mission is to help our clients predict their future through the better use of data, technology, people and processes. To do this, our business has always focused on: Managed Services, Database Consultancy, Data Engineering and Analytics Consultancy, and Adoption and Change Management. There has never been a more exciting time to join us: we're a fast-growing professional services and managed services business, and consistent growth is enabling us to expand our project management and delivery teams.

Role Overview: We are looking for an experienced Data Engineer with a strong background in SQL Server, SSIS, and Data Warehousing. This role will involve developing and optimizing ETL pipelines, designing data models, and delivering scalable, high-performance data solutions that support analytics and business intelligence.

Key Responsibilities: Design and maintain ETL processes using SQL Server Integration Services (SSIS). Work with SQL Server to create and optimize queries and stored procedures. Build and manage data warehouses to support reporting and analytics. Develop scalable data pipelines to support business needs. Collaborate with stakeholders to gather requirements and deliver data solutions. Monitor and optimize database and pipeline performance. Implement data management and ETL best practices.

Required Skills: Strong expertise in SQL Server and SSIS. In-depth understanding of Data Warehousing and data modeling concepts. Ability to design and optimize stored procedures, functions, and complex queries. Strong problem-solving skills and attention to detail. Excellent communication and collaboration skills.

Preferred Qualifications: Familiarity with cloud platforms (Azure, AWS, GCP) and data lake architecture. Experience with big data tools (Hadoop, Spark) is a plus. Mentoring or leadership experience is a bonus.

Education & Experience: Bachelor's degree in Computer Science or a related field. Several years of hands-on experience in SQL-based data engineering.

Additional Information: Hybrid working with flexible office visits in Hyderabad. Competitive compensation package with benefits such as healthcare, gym pass, and more. Supportive and inclusive culture with career progression opportunities. Apply via our Careers page or visit our LinkedIn, Facebook, and Twitter profiles for more about Coeo.

Diversity and Inclusion: Coeo is an equal opportunity employer committed to diversity and inclusion. All qualified applicants will be considered.
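The SSIS work this role describes is built visually in Visual Studio, so purely as a hedged illustration, here is a minimal Python sketch of the equivalent staging-load-then-merge pattern against SQL Server using pyodbc; the server, tables, and stored procedure names are hypothetical:

```python
# Minimal sketch of a staging load against SQL Server, assuming pyodbc is
# installed and the server/database/object names below exist (hypothetical).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=SalesDW;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Load raw rows into a staging table.
rows = [("2024-01-01", "WIDGET-1", 120.50), ("2024-01-01", "WIDGET-2", 87.25)]
cursor.executemany(
    "INSERT INTO stg.DailySales (SaleDate, Sku, Amount) VALUES (?, ?, ?)", rows
)

# Hand off to a stored procedure that merges staging into the warehouse fact table.
cursor.execute("EXEC dbo.usp_MergeDailySales")
conn.commit()
conn.close()
```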

Posted 4 weeks ago

Apply

5.0 - 8.0 years

17 - 20 Lacs

Kolkata

Work from Office

Key Responsibilities: Architect and implement scalable data solutions using GCP (BigQuery, Dataflow, Pub/Sub, Cloud Storage, Composer, etc.) and Snowflake. Lead the end-to-end data architecture, including ingestion, transformation, storage, governance, and consumption layers. Collaborate with business stakeholders, data scientists, and engineering teams to define and deliver the enterprise data strategy. Design robust data pipelines (batch and real-time) ensuring high data quality, security, and availability. Define and enforce data governance, data cataloging, and metadata management best practices. Evaluate and select appropriate tools and technologies to optimize data architecture and cost efficiency. Mentor junior architects and data engineers, guiding them on design best practices and technology standards. Collaborate with DevOps teams to ensure smooth CI/CD pipelines and infrastructure automation for data.

Skills & Qualifications: 3+ years of experience in data architecture, data engineering, or enterprise data platform roles. 3+ years of hands-on experience in Google Cloud Platform (especially BigQuery, Dataflow, Cloud Composer, Data Catalog). 3+ years of experience designing and implementing Snowflake-based data solutions. Deep understanding of modern data architecture principles (Data Lakehouse, ELT/ETL, Data Mesh, etc.). Proficient in Python, SQL, and orchestration tools like Airflow / Cloud Composer. Experience in data modeling (3NF, Star, Snowflake schemas) and designing data marts and warehouses. Strong understanding of data privacy, compliance (GDPR, HIPAA), and security principles in cloud environments. Familiarity with tools like dbt, Apache Beam, Looker, Tableau, or Power BI is a plus. Excellent communication and stakeholder management skills. GCP or Snowflake certification preferred (e.g., GCP Professional Data Engineer, SnowPro).

Preferred Qualifications: Experience working with hybrid or multi-cloud data strategies. Exposure to ML/AI pipelines and support for data science workflows. Prior experience in leading architecture reviews, PoCs, and technology roadmaps.
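As a rough, illustrative sketch of the BigQuery portion of this stack (not the employer's actual code), here is a transformation query materialized into a reporting table with the google-cloud-bigquery client; project, dataset, and table names are hypothetical:

```python
# Minimal sketch: run a transformation query in BigQuery and write the result
# to a reporting table. Assumes google-cloud-bigquery is installed and
# credentials are configured; dataset/table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT customer_id, SUM(amount) AS total_spend
    FROM `my_project.raw.orders`
    GROUP BY customer_id
"""
job_config = bigquery.QueryJobConfig(
    destination="my_project.marts.customer_spend",
    write_disposition="WRITE_TRUNCATE",
)
client.query(sql, job_config=job_config).result()  # wait for completion
```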

Posted 4 weeks ago

Apply

1.0 - 5.0 years

3 - 7 Lacs

Chandigarh

Work from Office

Key Responsibilities Assist in building and maintaining data pipelines on GCP using services like BigQuery, Dataflow, Pub/Sub, Cloud Storage, etc. Support data ingestion, transformation, and storage processes for structured and unstructured datasets. Participate in performance tuning and optimization of existing data workflows. Collaborate with data analysts, engineers, and stakeholders to ensure reliable data delivery. Document code, processes, and architecture for reproducibility and future reference. Debug issues in data pipelines and contribute to their resolution.
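A minimal sketch of the kind of GCP pipeline this posting describes: an Apache Beam streaming job reading from Pub/Sub and writing to BigQuery, runnable on Dataflow. The subscription, table, and schema are hypothetical assumptions:

```python
# Minimal Apache Beam sketch of a streaming ingest from Pub/Sub into BigQuery,
# runnable on Dataflow. Subscription and table names are hypothetical.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/events-sub"
        )
        | "Parse" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "my-project:analytics.events",
            schema="event_id:STRING,user_id:STRING,ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```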

Posted 4 weeks ago

Apply

8.0 - 11.0 years

20 - 25 Lacs

Mumbai

Work from Office

About This Role

Aladdin Data: BlackRock is one of the world's leading asset management firms, and Aladdin is the firm's end-to-end operating system for investment professionals to see their whole portfolio, understand risk exposure, and act with precision. Aladdin is our operating platform to manage financial portfolios. It unites the client data, operators, and technology needed to manage transactions in real time through every step of the investment process.

Aladdin Data is at the core of the Aladdin platform, and increasingly, our ability to consume, store, analyze, and gain insight from data is a key component of our competitive advantage. Our mission is to deliver critical insights to our stakeholders, enabling them to make data-driven decisions. BlackRock's Data Operations team is at the heart of our data ecosystem, ensuring seamless data pipeline operations across the firm. Within this team, the Process Engineering group focuses on building tools to enhance observability, improve operator experience, streamline operations, and provide analytics that drive continuous improvement across the organization.

Key Responsibilities

Strategic Leadership: Drive the roadmap for process engineering initiatives that align with broader Data Operations and enterprise objectives. Partner on efforts to modernize legacy workflows and build scalable, reusable solutions that support operational efficiency, risk reduction, and enhanced observability. Define and track success metrics for operational performance and process health across critical data pipelines.

Process Engineering & Solutioning: Design and develop tools and products to support operational efficiency, observability, risk management, and KPI tracking. Define success criteria for data operations in collaboration with stakeholders across teams. Break down complex data challenges into scalable, manageable solutions aligned with business needs. Proactively identify operational inefficiencies and deliver data-driven improvements.

Data Insights & Visualization: Design data science solutions to analyze vendor data trends, identify anomalies, and surface actionable insights for business users and data stewards. Develop and maintain dashboards (e.g., Power BI, Tableau) that provide real-time visibility into vendor data quality, usage patterns, and operational health. Create metrics and KPIs that measure vendor data performance, relevance, and alignment with business needs.

Quality Control & Data Governance: Build automated QC frameworks and anomaly detection models to validate data integrity across ingestion points. Work with data engineering and governance teams to embed robust validation rules and control checks into pipelines. Reduce manual oversight by building scalable, intelligent solutions that detect, report, and in some cases self-heal data issues.

Testing & Quality Assurance: Collaborate with data engineering and stewardship teams to validate data integrity throughout ETL processes. Lead the automation of testing frameworks for deploying new datasets or new pipelines.

Collaboration & Delivery: Work closely with internal and external stakeholders to align technical solutions with business objectives. Communicate effectively with both technical and non-technical teams. Operate in an agile environment, managing multiple priorities and ensuring timely delivery of high-quality data solutions.

Experience & Education: 8+ years of experience in data engineering, data operations, analytics, or related fields, with at least 3 years in a leadership or senior IC capacity. Bachelor's or Master's degree in a quantitative field (Computer Science, Data Science, Statistics, Engineering, or Finance). Experience working with financial market data providers (e.g., Bloomberg, Refinitiv, MSCI) is highly valued. Proven track record of building and deploying ML models.

Technical Expertise: Deep proficiency in SQL and Python, with hands-on experience in data visualization (Power BI, Tableau), cloud data platforms (e.g., Snowflake), and Unix-based systems. Exposure to modern frontend frameworks (React JS) and microservices-based architectures is a strong plus. Familiarity with various database systems (Relational, NoSQL, Graph) and scalable data processing techniques.

Leadership & Communication Skills: Proven ability to lead cross-functional teams and influence without authority in a global matrixed organization. Exceptional communication skills, with a track record of presenting complex technical topics to senior stakeholders and non-technical audiences. Strong organizational and prioritization skills, with a results-oriented mindset and experience in agile project delivery.

Preferred Qualifications: Certification in Snowflake or equivalent cloud data platforms. Certification in Power BI or other analytics tools. Experience leading Agile teams and driving enterprise-level transformation initiatives.

Our Benefits: To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

Our hybrid work model: BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person, aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.

About BlackRock: At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children's educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment: the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: linkedin.com/company/blackrock

BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
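As an illustrative sketch only (not BlackRock's tooling), here is a simple automated QC check of the kind described above, flagging anomalous daily record counts in a vendor feed with a rolling z-score; column names and thresholds are assumptions:

```python
# Minimal sketch of an automated QC check: flag days whose record counts
# deviate sharply from recent history. Columns/thresholds are illustrative.
import pandas as pd

def flag_volume_anomalies(df: pd.DataFrame, window: int = 30, z_thresh: float = 3.0) -> pd.DataFrame:
    """Flag days whose record_count is a z-score outlier vs a rolling window."""
    rolling = df["record_count"].rolling(window, min_periods=5)
    zscore = (df["record_count"] - rolling.mean()) / rolling.std()
    return df.assign(zscore=zscore, is_anomaly=zscore.abs() > z_thresh)

feed = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=60, freq="D"),
    "record_count": [10_000] * 59 + [1_200],  # last day drops sharply
})
print(flag_volume_anomalies(feed).tail(3))
```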

Posted 4 weeks ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Ahmedabad

Work from Office

Job Description

The Data Scientist - Operations will play a key role in transforming operational processes through advanced analytics and data-driven decision-making. This role focuses on optimizing supply chain, manufacturing, and overall operations by developing predictive models, streamlining workflows, and uncovering insights to enhance efficiency and reduce costs.

Key Responsibilities

Advanced Analytics and Data Modeling: Develop predictive models for demand forecasting, inventory optimization, and supply chain resilience. Leverage machine learning techniques to optimize production schedules, logistics, and procurement. Build algorithms to predict and mitigate risks in operational processes.

Operational Efficiency: Analyze manufacturing and supply chain data to identify bottlenecks and recommend process improvements. Implement solutions for waste reduction, cost optimization, and improved throughput. Conduct root cause analysis for operational inefficiencies and develop actionable insights.

Collaboration with Stakeholders: Partner with operations, supply chain, and procurement teams to understand analytical needs and deliver insights. Collaborate with IT and data engineering teams to ensure data availability and accuracy. Present findings and recommendations to non-technical stakeholders in an accessible manner.

Data Management and Tools: Work with large datasets to clean, preprocess, and analyze data.

Location(s): Ahmedabad - Venus Stratum GCC

Kraft Heinz is an Equal Opportunity Employer - Underrepresented Ethnic Minority Groups / Women / Veterans / Individuals with Disabilities / Sexual Orientation / Gender Identity and other protected classes.
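As a toy illustration of the demand-forecasting work this role covers (synthetic data, not Kraft Heinz's models), here is a trend-plus-seasonality baseline fit with scikit-learn:

```python
# Toy sketch of a demand forecasting baseline: linear trend plus annual
# seasonality encoded as sin/cos features. Data is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

weeks = np.arange(104)
demand = (500 + 2.0 * weeks + 50 * np.sin(2 * np.pi * weeks / 52)
          + np.random.default_rng(0).normal(0, 10, 104))

X = np.column_stack([weeks, np.sin(2 * np.pi * weeks / 52), np.cos(2 * np.pi * weeks / 52)])
model = LinearRegression().fit(X, demand)

future = np.arange(104, 116)
X_future = np.column_stack([future, np.sin(2 * np.pi * future / 52), np.cos(2 * np.pi * future / 52)])
print(model.predict(X_future).round(1))  # 12-week demand forecast
```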

Posted 4 weeks ago

Apply

5.0 - 10.0 years

12 - 20 Lacs

Bengaluru

Work from Office

Senior Data Engineer - Python - 5+ Years - Bengaluru

Are you a seasoned Data Engineer with expertise in Python and a passion for driving impactful solutions? Our client, a leading organization in Bengaluru, is seeking a Lead Data Engineer to spearhead their data engineering initiatives. If you have 5+ years of experience in data engineering and strong Python skills, this opportunity is perfect for you.

Location: Bengaluru

Your Future Employer: Our client is a prominent organization committed to driving innovation and leveraging data to make informed business decisions. With a focus on fostering an inclusive and collaborative work environment, they offer an excellent platform for professionals to thrive and contribute to meaningful projects.

Responsibilities: Designing and developing scalable data pipelines and ETL processes using Python. Collaborating with cross-functional teams to understand data requirements and deliver effective solutions. Mentoring and guiding junior team members in data engineering best practices. Contributing to architectural decisions and driving continuous improvements in data management.

Requirements: 5+ years of experience in data engineering with expertise in Python. Strong proficiency in building and optimizing data pipelines and ETL processes. Hands-on experience with cloud platforms such as AWS, Azure, or GCP. Proven ability to work in an Agile environment and drive innovative solutions. Excellent communication skills and the ability to collaborate with stakeholders at all levels.

What's in it for you: As the Lead Data Engineer, you will have the opportunity to lead impactful projects, mentor a talented team, and contribute to the organization's data-driven strategy. This role offers a competitive compensation package, a dynamic work culture, and the chance to make a significant impact in a forward-thinking organization.

Reach us: If you feel this opportunity is well aligned with your career progression plans, please feel free to reach me with your updated profile at rohit.kumar@crescendogroup.in

Disclaimer: Crescendo Global specializes in senior to C-level niche recruitment. We are passionate about empowering job seekers and employers with an engaging, memorable job search and leadership hiring experience. Crescendo Global does not discriminate on the basis of race, religion, color, origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Note: We receive a lot of applications on a daily basis, so it becomes difficult for us to get back to each candidate. Please assume that your profile has not been shortlisted if you don't hear back from us within 1 week. Your patience is highly appreciated.

Scammers can misuse Crescendo Global's name for fake job offers. We never ask for money, purchases, or system upgrades. Verify all opportunities at www.crescendo-global.com and report fraud immediately. Stay alert!

Profile keywords: Data Engineer, Python, ETL, Data Pipelines, Cloud Platforms, AWS, Azure, GCP, Agile, Data Management

Posted 4 weeks ago

Apply

10.0 - 12.0 years

25 - 27 Lacs

Indore, Hyderabad, Pune

Work from Office

We are seeking a skilled Lead Data Engineer with extensive experience in Snowflake, ADF, SQL, and other relevant data technologies to join our team. As a key member of our data engineering team, you will play an instrumental role in designing, developing, and managing data pipelines, working closely with cross-functional teams to drive the success of our data initiatives.

Key Responsibilities: Design, implement, and maintain data solutions using Snowflake, ADF, and SQL Server to ensure data integrity, scalability, and high performance. Lead and contribute to the development of data pipelines, ETL processes, and data integration solutions, ensuring the smooth extraction, transformation, and loading of data from diverse sources. Work with MSBI, SSIS, and Azure Data Lake Storage to optimize data flows and storage solutions. Collaborate with business and technical teams to identify project needs, estimate tasks, and set intermediate milestones to achieve final outcomes. Implement industry best practices related to Business Intelligence and Data Management, ensuring adherence to usability, design, and development standards. Perform in-depth data analysis to resolve data issues and improve overall data quality. Mentor and guide junior data engineers, providing technical expertise and supporting the development of their skills. Effectively collaborate with geographically distributed teams to ensure project goals are met in a timely manner.

Required Technical Skills: T-SQL, SQL Server, MSBI (SQL Server Integration Services, Reporting Services), Snowflake, Azure Data Factory (ADF), SSIS, Azure Data Lake Storage. Proficient in designing and developing data pipelines, data integration, and data management workflows. Strong understanding of Cloud Data Solutions, with a focus on Azure-based tools and technologies.

Nice to Have: Experience with Power BI for data visualization and reporting. Familiarity with Azure Databricks for data processing and advanced analytics.

Mandatory Key Skills: Azure Data Lake Storage, Business Intelligence, Data Management, T-SQL, Power BI, Azure Databricks, Cloud Data Solutions, Snowflake*, ADF*, SQL Server*, MSBI*, SSIS*
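For illustration only, a minimal Python sketch of a Snowflake load-and-merge step of the kind this role involves, using snowflake-connector-python; account, stage, and table names are hypothetical:

```python
# Minimal sketch of loading staged data into Snowflake and merging it into a
# curated layer. Assumes snowflake-connector-python is installed; account,
# warehouse, and object names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()

# Bulk-load files previously uploaded to a named stage.
cur.execute(
    "COPY INTO staging.orders FROM @etl_stage/orders/ "
    "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
)

# Merge staged rows into the curated layer.
cur.execute("""
    MERGE INTO curated.orders AS t
    USING staging.orders AS s ON t.order_id = s.order_id
    WHEN MATCHED THEN UPDATE SET t.amount = s.amount
    WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount)
""")
conn.close()
```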

Posted 4 weeks ago

Apply

7.0 - 9.0 years

19 - 22 Lacs

Chennai

Work from Office

This role is for a software engineer with 7+ years of experience, data engineering knowledge, and the following skill set:
1.) End-to-end full stack development
2.) GCP services such as BigQuery, Astronomer, Terraform, Airflow, Dataflow, and GCP architecture
3.) Python full stack / Java with cloud

Mandatory Key Skills: Software Engineering, BigQuery, Terraform, Airflow, Dataflow, GCP Architecture, Java, Cloud, data engineering*
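As a hedged sketch of how these tools typically fit together, here is a minimal Airflow DAG that runs a daily BigQuery transformation via the Google provider's operator; the DAG id, project, and SQL are hypothetical, and the `schedule` argument assumes Airflow 2.4+:

```python
# Minimal Airflow DAG sketch: a daily BigQuery SQL transformation.
# DAG id, project, and SQL are hypothetical; requires the
# apache-airflow-providers-google package.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_sales_rollup",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    rollup = BigQueryInsertJobOperator(
        task_id="rollup_sales",
        configuration={
            "query": {
                "query": "SELECT sale_date, SUM(amount) AS total "
                         "FROM `proj.raw.sales` GROUP BY sale_date",
                "useLegacySql": False,
                "destinationTable": {"projectId": "proj", "datasetId": "marts", "tableId": "sales_daily"},
                "writeDisposition": "WRITE_TRUNCATE",
            }
        },
    )
```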

Posted 4 weeks ago

Apply

1.0 - 5.0 years

3 - 7 Lacs

Gurugram

Work from Office

Key Responsibilities Assist in building and maintaining data pipelines on GCP using services like BigQuery, Dataflow, Pub/Sub, Cloud Storage, etc. Support data ingestion, transformation, and storage processes for structured and unstructured datasets. Participate in performance tuning and optimization of existing data workflows. Collaborate with data analysts, engineers, and stakeholders to ensure reliable data delivery. Document code, processes, and architecture for reproducibility and future reference. Debug issues in data pipelines and contribute to their resolution.

Posted 4 weeks ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Agra

Work from Office

Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
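As a small illustration of the SQL-plus-Python-plus-Redshift skill set listed here (not the employer's code), a COPY of S3-staged Parquet into Redshift via psycopg2, which works because Redshift is PostgreSQL-compatible; the cluster endpoint, bucket, and IAM role are hypothetical:

```python
# Minimal sketch: issue a Redshift COPY from S3 over the PostgreSQL protocol.
# Cluster endpoint, credentials, bucket, and IAM role are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="***",
)
with conn, conn.cursor() as cur:
    cur.execute("""
        COPY raw.events
        FROM 's3://my-bucket/events/2024-01-01/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
        FORMAT AS PARQUET
    """)
```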

Posted 4 weeks ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Surat

Work from Office

Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.

Posted 4 weeks ago

Apply

7.0 - 12.0 years

18 - 30 Lacs

Chennai

Hybrid

We have a vacancy for a Senior Data Engineer. We are seeking an experienced Senior Data Engineer to join our dynamic team. The ideal candidate will be responsible for designing and implementing the data engineering framework.

Responsibilities: Strong skills in BigQuery, GCP Cloud Data Fusion (for ETL/ELT), and Power BI, including Power BI reporting; strong skills in data pipelines are essential. Design and implement the data engineering framework and data pipelines using Databricks and Azure Data Factory. Document the high-level design components of the Databricks data pipeline framework. Evaluate and document the current dependencies on the existing DEI toolset and agree a migration plan. Lead the design and implementation of an MVP Databricks framework. Document and agree an aligned set of standards to support the implementation of a candidate pipeline under the new framework. Support integrating a test automation approach into the Databricks framework, in conjunction with the test engineering function, to support CI/CD and automated testing. Support the development team's capability building by establishing an L&D and knowledge transition approach. Support the implementation of data pipelines against the new framework in line with the agreed migration plan. Ensure data quality management, including profiling, cleansing, and deduplication, to support the build of data products for clients.

Skill Set: Experience working in Azure Cloud using Azure SQL, Azure Databricks, Azure Data Lake, Delta Lake, and Azure DevOps. Proficient in Python, PySpark, and SQL coding skills. Data profiling and data modelling experience on large data transformation projects, creating data products and data pipelines. Creating data management frameworks and data pipelines which are metadata- and business-rules-driven using Databricks. Experience of reviewing datasets for data products in terms of data quality management and populating data schemas set by Data Modellers. Experience with data profiling, data quality management, and data cleansing tools.

Immediate joining or short notice is required. Please call Hemanth on 9715166618 for more information.

Thanks,
Hemanth
9715166618
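A minimal PySpark sketch of the profiling, cleansing, and deduplication step this posting mentions, writing the result as a Delta table; paths and column names are illustrative assumptions:

```python
# Minimal PySpark sketch: profile, cleanse, and deduplicate a raw feed, then
# publish it as a Delta table. Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("customer_cleanse").getOrCreate()

raw = spark.read.json("/mnt/raw/customers/")

clean = (
    raw.filter(F.col("customer_id").isNotNull())      # drop unusable rows
       .withColumn("email", F.lower(F.trim("email")))  # normalise keys
       .dropDuplicates(["customer_id"])                # deduplicate
)

# Quick profile for the data quality report.
clean.select(F.count(F.lit(1)).alias("rows"),
             F.countDistinct("email").alias("distinct_emails")).show()

clean.write.format("delta").mode("overwrite").save("/mnt/curated/customers/")
```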

Posted 4 weeks ago

Apply

4.0 - 6.0 years

7 - 9 Lacs

Chennai

Work from Office

What you'll be doing

We're seeking a skilled Data Engineering Analyst to join our high-performing team and propel our telecom business forward. You'll contribute to building cutting-edge data products and assets for our wireless and wireline operations, spanning areas like consumer analytics, network performance, and service assurance. In this role, you will develop deep expertise in various telecom domains. As part of the Data Architecture Strategy team, you'll collaborate closely with IT and business stakeholders to design and implement user-friendly, robust data product solutions, incorporating data classification and governance principles.

Your responsibilities encompass: Collaborate with stakeholders to understand data requirements and translate them into efficient data models. Design, develop, and implement data architecture solutions on GCP and Teradata to support our telecom business. Design data ingestion for both real-time and batch processing, ensuring efficient and scalable data acquisition for an effective data warehouse. Maintain meticulous documentation, including data design specifications, functional test cases, data lineage, and other relevant artifacts for all data product solution assets. Implement data architecture standards as set by the data architecture team. Proactively identify opportunities for automation and performance optimization within your scope of work. Collaborate effectively within a product-oriented organization, providing data expertise and solutions across multiple business units. Cultivate strong cross-functional relationships and establish yourself as a subject matter expert in data and analytics within the organization.

What we're looking for...

You're curious about new technologies and the game-changing possibilities they create. You like to stay up to date with the latest trends and apply your technical expertise to solving business problems.

You'll need to have: Bachelor's degree with four or more years of work experience. Four or more years of relevant work experience. Expertise in building complex SQL to perform data analysis and design data solutions. Experience with ETL, data warehouse concepts, and the data management life cycle. Experience in creating technical documentation such as source-to-target mappings, source contracts, SLAs, etc. Experience in any DBMS, preferably GCP/BigQuery. Experience in creating data models using the Erwin tool. Experience in shell scripting and Python. Understanding of Git version control and basic Git commands. Understanding of data quality concepts.

Even better if you have one or more of the following: Certification as a GCP Data Engineer. Understanding of NoSQL databases like Cassandra, MongoDB, etc. Accuracy and attention to detail. Good problem solving, analytical, and research capabilities. Good verbal and written communication. Experience presenting to leaders and influencing stakeholders.

Posted 4 weeks ago

Apply

4.0 - 8.0 years

0 - 1 Lacs

Bengaluru

Remote

Dear Candidate,

Greetings from ADPMN!!

Company profile: ADPMN was incorporated in the year 2019. However, with the undersigned's decades of experience in the software industry, the network he has cultivated, and his technical expertise, ADPMN is bound to grow steadily and quickly. It provides quality software development services with an emphasis on meeting the unique business needs of its clients. It has the capacity to provide consulting services for complex projects, and it handles client needs in areas that require information technology expertise. Software and information technology applications have become part of every domain, and a competent service provider in this area must have a workforce with insight into the areas that seek application of the technology to design software, web applications, and databases. For more details: https://adpmn.com/

Position Overview

Job Title: Junior Data Engineer
Experience: 4-8 years
Location: Remote
Employment Type: Full-Time

Job Summary: We are seeking a highly motivated Junior Data Engineer to join our data engineering team. The ideal candidate will have foundational experience and strong knowledge of Azure cloud services, particularly Azure Databricks, PySpark, Azure Data Factory, and SQL. You will work closely with senior data engineers and business stakeholders to build, optimize, and maintain data pipelines and infrastructure in a cloud-based environment.

Key Responsibilities: Design, develop, and maintain scalable ETL/ELT data pipelines using Azure Data Factory and PySpark on Azure Databricks. Collaborate with cross-functional teams to gather and understand data requirements. Implement data transformations, cleansing, and aggregations using PySpark and SQL. Monitor and troubleshoot data workflows and ensure data integrity and availability. Assist in performance tuning of data pipelines and queries. Work with Azure-based data storage solutions such as Data Lake Storage and SQL Databases. Document data flows, pipeline architecture, and technical procedures. Stay updated with the latest Azure and data engineering tools and best practices.

Required Skills: Hands-on experience or strong academic understanding of the Azure cloud platform, especially Azure Databricks, Azure Data Factory, and Azure Data Lake. Solid knowledge of PySpark and distributed data processing concepts. Strong proficiency in SQL and database fundamentals. Good understanding of ETL/ELT processes and data pipeline development. Basic understanding of DevOps principles and version control (e.g., Git) is a plus. Excellent analytical, problem-solving, and communication skills.

Job Location: Remote
Mode of Employment: Permanent with ADPMN (C2H)
Experience: 4 to 8 years
Number of Positions: 5

Please feel free to reach out on +91 95425 33666 / rajendrapv@adpmn.com if you need more information. Let me know your interest along with the following details: Full Name, Date of Birth, Total Experience as Data Engineer, Relevant Experience (Azure Databricks, Azure Data Factory, and Azure Data Lake), Current Company, Current Payroll Company (if any), Current Location, Current CTC, Expected CTC, Notice Period.

Posted 4 weeks ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Location: Bangalore, Hyderabad, Chennai
Notice Period: Immediate to 20 days
Experience: 5+ years
Relevant Experience: 5+ years
Skills: Data Engineer, Azure, Python, Pandas, SQL, PySpark, Databricks, data pipelines, Synapse

Posted 4 weeks ago

Apply

3.0 - 8.0 years

20 - 30 Lacs

Chennai

Hybrid

Job Title: Senior Data Engineer - Data Products
Location: Chennai, India
Open Roles: 2
Mode: Hybrid

About the Role

Are you a hands-on data engineer who thrives on solving complex data challenges and building modern cloud-native solutions? We're looking for two experienced Senior Data Engineers to join our growing Data Engineering team. This is an exciting opportunity to work on cutting-edge data platform initiatives that power advanced analytics, AI solutions, and digital transformation across a global enterprise. In this role, you'll help design and build reusable, scalable, and secure data pipelines on a multi-cloud infrastructure, while collaborating with cross-functional teams in a highly agile environment.

What You'll Do

Design and build robust data pipelines and ETL frameworks using modern tools and cloud platforms. Implement lakehouse architecture (Bronze/Silver/Gold layers) and support data product publishing via Unity Catalog. Work with structured and unstructured enterprise data including ERP, CRM, and product data systems. Optimize pipeline performance, reliability, and security across AWS and Azure environments. Automate infrastructure using IaC tools like Terraform and AWS CDK. Collaborate closely with data scientists, analysts, and platform teams to deliver actionable data products. Participate in agile ceremonies, conduct code reviews, and contribute to team knowledge sharing. Ensure compliance with data privacy, cybersecurity, and governance policies.

What You Bring

3+ years of hands-on experience in data engineering roles. Strong command of SQL and Python; experience with Scala is a plus. Proficiency in cloud platforms (AWS, Azure), Databricks, DBT, Airflow, and version control tools like GitLab. Hands-on experience implementing lakehouse architectures and multi-hop data flows using Delta Lake. Background in working with enterprise data systems like SAP, Salesforce, and other business-critical platforms. Familiarity with DevOps, DataOps, and agile delivery methods (Jira, Confluence). Strong understanding of data security, privacy compliance, and production-grade pipeline management. Excellent communication skills and ability to work in global, multicultural teams.

Why Join Us?

Opportunity to work with modern data technologies in a complex, enterprise-scale environment. Be part of a collaborative, forward-thinking team that values innovation and continuous learning. A hybrid work model that offers both flexibility and team engagement. A role where you can make a real impact by contributing to digital transformation and data-driven decision-making.
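As an illustrative sketch of the Bronze/Silver/Gold multi-hop flow named above (table names are hypothetical, not this employer's schema), using Delta tables on Databricks:

```python
# Minimal sketch of a medallion (Bronze/Silver/Gold) multi-hop flow with
# Delta tables. Table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw landing, schema-on-read.
bronze = spark.read.format("delta").table("bronze.sap_orders")

# Silver: typed, validated, deduplicated records.
silver = (
    bronze.filter(F.col("order_id").isNotNull())
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .dropDuplicates(["order_id"])
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

# Gold: business-level aggregate published as a data product.
gold = silver.groupBy("customer_id").agg(F.sum("amount").alias("lifetime_value"))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.customer_value")
```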

Posted 4 weeks ago

Apply

3.0 - 6.0 years

20 - 30 Lacs

Bengaluru

Work from Office

Job Title: Data Scientist
Reports to: Lead - Data Science

About UPL: UPL is focused on emerging as a premier global provider of total crop solutions designed to secure the world's long-term food supply. Winning farmers' hearts across the globe, while leading the way with innovative products and services that make agriculture sustainable, UPL is the fastest growing company in the industry. UPL has a rich history of 50+ years with presence in 120+ countries. Based on the recognition that humankind is one community, UPL's overarching commitment is to improve areas of its presence, workplace, and customer engagement. Our purpose is 'OpenAg': an open agriculture network that feeds sustainable growth for all. No limits, no borders. In order to create a sustainable food chain, UPL is now working to build a future-ready, analytics-driven organization that will be even more efficient, more innovative, and more agile. We are setting up a "Digital and Analytics" CoE to work on disruptive projects that will have an impact that matters for the planet. It will help us reimagine our business to ensure the best outcomes for growers, consumers, employees, and our planet. Work with us to get exposure to cutting-edge solutions in digital & advanced analytics, mentorship from senior leaders & domain experts, and access to a great work environment.

Job Responsibilities: The Data Scientists will leverage expertise in advanced statistical and modelling techniques to design, prototype, and build the next-generation analytics engines and services. They will work closely with the analytics teams and business teams to derive actionable insights, helping the organization achieve its strategic goals. Their work will involve high levels of interaction with the integrated analytics team, including data engineers, translators, and more senior data scientists. Has expertise in implementing complex statistical analyses for data processing, exploration, model building and implementation. Leads teams of 2-3 associate data scientists in the use case building and delivery process. Can communicate complex technical concepts to both technical and non-technical audiences. Plays a key role in driving ideation around the modelling process and developing models. Can conceptualize and drive re-iteration and fine-tuning of models. Contributes to knowledge building and sharing by researching best practices, documenting solutions, and continuously iterating on new ways to solve problems. Mentors junior team members to do the same.

REQUIRED EDUCATION AND EXPERIENCE: Master's degree in Computer Science, Statistics, Math, Operations Research, Economics, or a related field. Advanced-level programming skills in at least 1 coding language (R/Python/Scala). Practical experience of developing advanced statistical and machine learning models. At least 2 years of relevant analytics experience. Experience in using large database systems preferred. Has developed niche expertise in at least one functional domain.

REQUIRED SKILLS: Ability to work well in agile environments in diverse teams with multiple stakeholders. Experience of leading small teams. Able to solve complex problems and break them down into simpler parts. Ability to effectively communicate complex analytical and technical content. High-energy, passionate individual who can work closely with other team members. Strong entrepreneurial drive to test new, out-of-the-box techniques. Able to prioritize workstreams and adopt an agile approach. Willing to adopt an iterative approach; experimental mindset to drive innovation.

LOCATION: Bangalore

What's in it for you?

Disruptive projects: Work on 'breakthrough' digital-and-analytics projects to enable UPL's vision of building a future-ready organization. This involves deploying solutions to help us increase our sales, sustain our profitability, improve our speed to market, supercharge our R&D efforts, and support the way we work internally. Help us ensure we have access to the best business insights that our data analysis can offer us.

Cross-functional leadership exposure: Work directly under the guidance of functional leadership at UPL, on the most critical business problems for the organization (and the industry) today. It will give you exposure to a large cross-functional team (e.g., spanning manufacturing, procurement, commercial, quality, and IT/OT experts), allowing multi-functional learning in D&A deployment.

Environment fostering professional and personal development: Strengthen professional learning in a highly impact-oriented and meritocratic environment that is focused on delivering disproportionate business value through innovative solutions. It will be supported by on-the-job coaching from experienced domain experts, and continuous feedback from a highly motivated and capable set of peers. Comprehensive training programs for continuous development through UPL's D&A academy will help in accelerating growth opportunities.

Come join us in this transformational journey! Let's collectively change the game with Digital & Analytics!

Posted 4 weeks ago

Apply

3.0 - 6.0 years

20 - 30 Lacs

Bengaluru

Work from Office

Job Title: Data Scientist

About UPL: UPL is focused on emerging as a premier global provider of total crop solutions designed to secure the world's long-term food supply. Winning farmers' hearts across the globe, while leading the way with innovative products and services that make agriculture sustainable, UPL is the fastest growing company in the industry. UPL has a rich history of 50+ years with presence in 120+ countries. Based on the recognition that humankind is one community, UPL's overarching commitment is to improve areas of its presence, workplace, and customer engagement. Our purpose is 'OpenAg': an open agriculture network that feeds sustainable growth for all. No limits, no borders. In order to create a sustainable food chain, UPL is now working to build a future-ready, analytics-driven organization that will be even more efficient, more innovative, and more agile. We are setting up a "Digital and Analytics" CoE to work on disruptive projects that will have an impact that matters for the planet. It will help us reimagine our business to ensure the best outcomes for growers, consumers, employees, and our planet. Work with us to get exposure to cutting-edge solutions in digital & advanced analytics, mentorship from senior leaders & domain experts, and access to a great work environment.

Job Responsibilities: The Data Scientists will leverage expertise in advanced statistical and modelling techniques to design, prototype, and build the next-generation analytics engines and services. They will work closely with the analytics teams and business teams to derive actionable insights, helping the organization achieve its strategic goals. Their work will involve high levels of interaction with the integrated analytics team, including data engineers, translators, and more senior data scientists. Has expertise in implementing complex statistical analyses for data processing, exploration, model building and implementation. Leads teams of 2-3 associate data scientists in the use case building and delivery process. Can communicate complex technical concepts to both technical and non-technical audiences. Plays a key role in driving ideation around the modelling process and developing models. Can conceptualize and drive re-iteration and fine-tuning of models. Contributes to knowledge building and sharing by researching best practices, documenting solutions, and continuously iterating on new ways to solve problems. Mentors junior team members to do the same.

REQUIRED EDUCATION AND EXPERIENCE: Master's degree in Computer Science, Statistics, Math, Operations Research, Economics, or a related field. Advanced-level programming skills in at least 1 coding language (R/Python/Scala). Practical experience of developing advanced statistical and machine learning models. At least 2 years of relevant analytics experience. Experience in using large database systems preferred. Has developed niche expertise in at least one functional domain.

REQUIRED SKILLS: Ability to work well in agile environments in diverse teams with multiple stakeholders. Experience of leading small teams. Able to solve complex problems and break them down into simpler parts. Ability to effectively communicate complex analytical and technical content. High-energy, passionate individual who can work closely with other team members. Strong entrepreneurial drive to test new, out-of-the-box techniques. Able to prioritize workstreams and adopt an agile approach. Willing to adopt an iterative approach; experimental mindset to drive innovation.

LOCATION: Bangalore

What's in it for you?

Disruptive projects: Work on 'breakthrough' digital-and-analytics projects to enable UPL's vision of building a future-ready organization. This involves deploying solutions to help us increase our sales, sustain our profitability, improve our speed to market, supercharge our R&D efforts, and support the way we work internally. Help us ensure we have access to the best business insights that our data analysis can offer us.

Cross-functional leadership exposure: Work directly under the guidance of functional leadership at UPL, on the most critical business problems for the organization (and the industry) today. It will give you exposure to a large cross-functional team (e.g., spanning manufacturing, procurement, commercial, quality, and IT/OT experts), allowing multi-functional learning in D&A deployment.

Environment fostering professional and personal development: Strengthen professional learning in a highly impact-oriented and meritocratic environment that is focused on delivering disproportionate business value through innovative solutions. It will be supported by on-the-job coaching from experienced domain experts, and continuous feedback from a highly motivated and capable set of peers. Comprehensive training programs for continuous development through UPL's D&A academy will help in accelerating growth opportunities.

Come join us in this transformational journey! Let's collectively change the game with Digital & Analytics!

Posted 4 weeks ago

Apply

4.0 - 9.0 years

5 - 15 Lacs

Bengaluru

Work from Office

JD below:

Bachelor's degree preferred, or equivalent combination of education, training, and experience. 5+ years of professional experience with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala, etc.). 3+ years of professional experience with enterprise domains like HR, Finance, or Supply Chain. 6+ years of professional experience with more than one SQL and relational databases, including expertise in Presto, Spark, and MySQL. Professional experience designing and implementing real-time pipelines (Apache Kafka, or similar technologies). 5+ years of professional experience in custom ETL design, implementation, and maintenance. 3+ years of professional experience with data modeling, including expertise in data warehouse design and dimensional modeling. 5+ years of professional experience working with cloud or on-premises Big Data/MPP analytics platforms (Teradata, AWS Redshift, Google BigQuery, Azure Synapse Analytics, or similar). Experience with data quality and validation (using Apache Airflow). Experience with anomaly/outlier detection. Experience with data science workflows (Jupyter Notebooks, Bento, or similar tools). Experience with Airflow or similar workflow management systems. Experience querying massive datasets using Spark, Presto, Hive, or similar. Experience building systems integrations and tooling interfaces, and implementing integrations for ERP systems (Oracle, SAP, Salesforce, etc.). Experience in data visualization using Power BI and Tableau. Proficiency in the Python programming language and Python libraries, with a focus on data engineering and data science applications. Professional fluency in English required.
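A minimal sketch of the real-time pipeline experience listed here: consuming JSON events from Kafka with the kafka-python package and applying a trivial transformation; the broker address, topic, and field names are assumptions:

```python
# Minimal Kafka consumer sketch: read JSON events and reshape them for a
# downstream load. Broker, topic, and fields are hypothetical; uses kafka-python.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers=["broker:9092"],
    group_id="etl-loader",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    order = message.value
    # Transform: keep only the fields the downstream model needs.
    record = {"order_id": order["id"], "amount": round(order["amount"], 2)}
    print(record)  # in practice, write to the warehouse or a staging topic
```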

Posted 4 weeks ago

Apply

2.0 - 4.0 years

12 - 15 Lacs

Navi Mumbai

Work from Office

Define, design, develop, and test software components/applications using Microsoft Azure (Databricks, Data Factory, Data Lake Storage, Logic Apps, Azure Key Vaults, ADLS). Strong SQL skills; structured and unstructured datasets; data modeling.

Required Candidate Profile

Must have: Databricks, Python, SQL, PySpark. Big Data ecosystem, Spark ecosystem, Azure (ADF, ADB, Logic Apps, Azure SQL Database, Azure Key Vaults, ADLS, Synapse), AWS, data modelling, ETL methodology.

Posted 4 weeks ago

Apply

7.0 - 12.0 years

7 - 12 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled Azure Databricks Engineering Lead to design, develop, and optimize data pipelines using Azure Databricks. The ideal candidate will have deep expertise in data engineering, cloud-based data processing, and ETL workflows to support business intelligence and analytics initiatives.

Primary Responsibilities: Design, develop, and implement scalable data pipelines using Azure Databricks. Develop PySpark-based data transformations and integrate structured and unstructured data from various sources. Optimize Databricks clusters for performance, scalability, and cost-efficiency within the Azure ecosystem. Monitor, troubleshoot, and resolve performance bottlenecks in Databricks workloads. Manage orchestration and scheduling of end-to-end data pipelines using tools like Apache Airflow, ADF scheduling, and Logic Apps. Collaborate effectively with the architecture team in designing solutions and with product owners in validating implementations. Implement best practices to ensure data quality, monitoring, logging, and alerting for failure scenarios and exception handling. Document step-by-step processes to troubleshoot potential issues and deliver cost-optimized cloud solutions. Provide technical leadership, mentorship, and best practices for junior data engineers. Stay up to date with Azure and Databricks advancements to continuously improve data engineering capabilities.

Required Qualifications: Overall 7+ years of experience in the IT industry and 6+ years of experience in data engineering, with at least 3 years of hands-on experience in Azure Databricks. Experience with CI/CD pipelines for data engineering solutions (Azure DevOps, Git). Hands-on experience with Delta Lake, Lakehouse architecture, and data versioning. Solid expertise in the Azure ecosystem, including Azure Synapse, Azure SQL, ADLS, and Azure Functions. Proficiency in PySpark, Python, and SQL for data processing in Databricks. Deep understanding of data warehousing, data modeling (Kimball/Inmon), and big data processing. Solid knowledge of performance tuning, partitioning, caching, and cost optimization in Databricks. Proven excellent written and verbal communication skills. Proven excellent problem-solving skills and ability to work independently. Ability to balance multiple competing priorities and execute accordingly. Highly self-motivated with excellent interpersonal and collaborative skills. Ability to anticipate risks and obstacles and develop plans for mitigation. Proven excellent documentation experience and skills.

Preferred Qualifications: Azure certifications (DP-203, AZ-304, etc.). Experience with infrastructure as code, scheduling as code, and automating operational activities using Terraform scripts.
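As a hedged illustration of the partitioning and file-compaction tuning this role calls for (table and column names are hypothetical), on a Databricks Delta table:

```python
# Minimal sketch of Delta table tuning on Databricks: date partitioning for
# file pruning, then compaction/co-location. Names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

events = spark.read.format("delta").table("raw.events")

# Partition by date so incremental jobs and queries prune files effectively.
(events.write.format("delta")
       .mode("overwrite")
       .partitionBy("event_date")
       .saveAsTable("curated.events"))

# Compact small files and co-locate rows for the dominant filter column
# (OPTIMIZE/ZORDER are Databricks Delta SQL commands).
spark.sql("OPTIMIZE curated.events ZORDER BY (user_id)")
```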

Posted 4 weeks ago

Apply

12.0 - 17.0 years

15 - 20 Lacs

Pune, Bengaluru

Hybrid

Tech Architect - AWS AI (Anthropic)

Experience: 12+ years of total IT experience, with a minimum of 8 years in AI/ML architecture and solution development. Strong hands-on expertise in designing and building GenAI solutions using AWS services such as Amazon Bedrock, SageMaker, and Anthropic Claude models.

Role Overview: The Tech Architect - AWS AI (Anthropic) will be responsible for translating AI solution requirements into scalable and secure AWS-native architectures. This role combines architectural leadership with hands-on technical depth in GenAI model integration, data pipelines, and deployment using Amazon Bedrock and Claude models. The ideal candidate will bridge the gap between strategic AI vision and engineering execution while ensuring alignment with enterprise cloud and security standards.

Key Responsibilities: Design robust, scalable architectures for GenAI use cases using Amazon Bedrock and Anthropic Claude. Lead architectural decisions involving model orchestration, prompt optimization, RAG pipelines, and API integration. Define best practices for implementing AI workflows using SageMaker, Lambda, API Gateway, and Step Functions. Review and validate implementation approaches with tech leads and developers; ensure alignment with architecture blueprints. Contribute to client proposals, solution pitch decks, and technical sections of RFP/RFI responses. Ensure AI solutions meet enterprise requirements for security, privacy, compliance, and performance. Collaborate with cloud infrastructure, data engineering, and DevOps teams to ensure seamless deployment and monitoring. Stay updated on AWS Bedrock advancements, Claude model improvements, and best practices for GenAI governance.

Required Skills and Competencies: Deep hands-on experience with Amazon Bedrock, Claude (Anthropic), Amazon Titan, and embedding-based workflows. Proficient in Python and cloud-native API development; experienced with JSON, RESTful integrations, and serverless orchestration. Strong understanding of SageMaker (model training, tuning, pipelines), real-time inference, and deployment strategies. Knowledge of RAG architectures, vector search (e.g., OpenSearch, Pinecone), and prompt engineering techniques. Expertise in IAM, encryption, access control, and responsible AI principles for secure AI deployments. Ability to create and communicate high-quality architectural diagrams and technical documentation.

Desirable Qualifications: AWS Certified Machine Learning - Specialty and/or AWS Certified Solutions Architect - Professional. Familiarity with LangChain, Haystack, or Semantic Kernel in an AWS context. Experience with enterprise-grade GenAI use cases such as intelligent search, document summarization, conversational AI, and code copilots. Exposure to integrating third-party model APIs and services available via AWS Marketplace.

Soft Skills: Strong articulation and technical storytelling capabilities for client and executive conversations. Proven leadership in cross-functional project environments with globally distributed teams. Analytical mindset with a focus on delivering reliable, maintainable, and performant AI solutions. Self-driven, curious, and continuously exploring innovations in GenAI and AWS services.

Our Offering: Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment. Wellbeing programs and work-life balance, with integration and passion-sharing events. Attractive salary and company initiative benefits. Courses and conferences. Hybrid work culture.
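For illustration, a minimal boto3 sketch of invoking an Anthropic Claude model through Amazon Bedrock, the pattern at the center of this role; the model id, region, and prompt are assumptions, and availability depends on the account's model access:

```python
# Minimal sketch: invoke a Claude model on Amazon Bedrock with boto3.
# Model id and region are illustrative; check model access in the account.
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarize this contract clause: ..."}],
}
response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps(body),
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```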

Posted 4 weeks ago

Apply

3.0 - 5.0 years

16 - 18 Lacs

Bengaluru

Work from Office

KPMG India is looking for an Azure Data Engineer - Consultant to join our dynamic team and embark on a rewarding career journey. Ensure that data is cleansed, mapped, transformed, and otherwise optimised for storage and use according to business and technical requirements. Solution design using Microsoft Azure services and other tools. The ability to automate tasks and deploy production-standard code (with unit testing, continuous integration, versioning, etc.). Load transformed data into storage and reporting structures in destinations including the data warehouse, high-speed indexes, real-time reporting systems, and analytics applications. Build data pipelines to collectively bring together data. Other responsibilities include extracting data, troubleshooting, and maintaining the data warehouse.

Posted 4 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Noida, Hyderabad

Work from Office

Primary Responsibilities:
Support the full data engineering lifecycle, including research, proofs of concept, design, development, testing, deployment, and maintenance of data management solutions.
Utilize knowledge of various data management technologies to drive data engineering projects.
Lead data acquisition efforts to gather data from structured or semi-structured source systems of record to hydrate the client data warehouse and power analytics across numerous health care domains.
Leverage a combination of ETL/ELT methodologies to pull complex relational and dimensional data to support loading data marts and reporting aggregates.
Eliminate unwarranted complexity and unneeded interdependencies.
Detect data quality issues, identify root causes, implement fixes, and manage data audits to mitigate data challenges.
Implement, modify, and maintain data integration efforts that improve data efficiency, reliability, and value.
Leverage and facilitate the evolution of best practices for data acquisition, transformation, storage, and aggregation that solve current challenges and reduce the risk of future challenges.
Create data transformations that address business requirements and other constraints.
Partner with the broader analytics organization to make recommendations for changes to data systems and the architecture of data platforms.
Support the implementation of a modern data framework that facilitates business intelligence reporting and advanced analytics.
Prepare high-level design documents and detailed technical design documents, applying best practices to enable efficient data ingestion, transformation, and data movement.
Leverage DevOps tools to enable code versioning and code deployment.
Leverage data pipeline monitoring tools to detect data integrity issues before they result in user-visible outages or data quality issues.
Leverage processes and diagnostic tools to troubleshoot, maintain, and optimize solutions and respond to customer and production issues.
Continuously support technical debt reduction, process transformation, and overall optimization.
Leverage and contribute to the evolution of standards for high-quality documentation of data definitions, transformations, and processes to ensure data transparency, governance, and security.
Ensure that all solutions meet the business needs and requirements for security, scalability, and reliability.
Comply with the terms and conditions of the employment contract, company policies and procedures, and any directives (such as, but not limited to, transfer and/or reassignment to different work locations, changes in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
5+ years of experience creating source-to-target mappings and ETL designs for integrating new or modified data streams into the data warehouse/data marts (a minimal sketch of such a mapping follows this list).
2+ years of experience with Cerner Millennium / HealtheIntent and experience using Cerner CCL.
2+ years of experience working with Health Catalyst product offerings, including data warehousing solutions, knowledgebase, and analytics solutions.
Epic certifications in one or more of the following modules: Caboodle, EpicCare, Grand Central, Healthy Planet, HIM, Prelude, Resolute, Tapestry, or Reporting Workbench.
Experience with Unix, PowerShell, or other batch scripting languages.
Depth of experience and a proven track record creating and maintaining sophisticated data frameworks for healthcare organizations.
Experience supporting data pipelines that power analytical content within common reporting and business intelligence platforms (e.g. Power BI, Qlik, Tableau, MicroStrategy).
Experience supporting analytical capabilities, including reporting, dashboards, extracts, BI tools, analytical web applications, and other similar products.
Experience contributing to cross-functional efforts with proven success in creating healthcare insights.
Experience and credibility interacting with analytics and technology leadership teams.
Exposure to Azure, AWS, or Google Cloud ecosystems.
Exposure to Amazon Redshift, Amazon S3, Hadoop HDFS, Azure Blob, or similar big data storage and management components.
Desire to continuously learn and seek new options and approaches to business challenges.
A willingness to leverage best practices, share knowledge, and improve the collective work of the team.
Ability to effectively communicate concepts verbally and in writing.
Willingness to support limited travel, up to 10%.
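To make the first qualification concrete, here is a minimal sketch of a mapping-driven transformation. It is illustrative only: the column names, transforms, and audit rule are hypothetical and are not Cerner, Epic, or Health Catalyst specifics.

```python
# Source-to-target mapping sketch (all names hypothetical).
import pandas as pd

# Mapping: source column -> (target column, column-level transform).
MAPPING = {
    "pat_mrn":    ("patient_id",   str.strip),
    "admit_dt":   ("admission_ts", pd.to_datetime),
    "dischrg_dt": ("discharge_ts", pd.to_datetime),
}

def apply_mapping(source: pd.DataFrame) -> pd.DataFrame:
    """Project and rename source columns per the mapping, then flag rows
    that fail a simple data quality audit."""
    target = pd.DataFrame()
    for src_col, (tgt_col, transform) in MAPPING.items():
        target[tgt_col] = source[src_col].map(transform)
    # Audit rule (hypothetical): discharge must not precede admission.
    target["dq_failed"] = target["discharge_ts"] < target["admission_ts"]
    return target

# Example:
# df = pd.DataFrame({"pat_mrn": [" 1001 "],
#                    "admit_dt": ["2024-01-05"],
#                    "dischrg_dt": ["2024-01-02"]})
# apply_mapping(df)  # dq_failed is True for this row
```

One common reason to keep the mapping as data rather than code is that the data audits and documentation standards described above can then be generated from the same definition.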

Posted 4 weeks ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Bengaluru

Work from Office


What you'll be doing:
We are looking for data engineers who can work with world-class team members to help drive the telecom business to its full potential. We are building data products and assets for the telecom wireless and wireline business, including consumer analytics, telecom network performance, and service assurance analytics. We are working on cutting-edge technologies such as digital twins to build these analytical platforms and provide data support for varied AI/ML implementations.

As a data engineer, you will collaborate with business product owners, coaches, industry-renowned data scientists, and system architects to develop strategic data solutions from sources that include batch, file, and data streams. As a subject matter expert on solutions and platforms, you will be responsible for providing technical leadership to various projects on the data platform team. You are expected to have deep knowledge of the specified technological areas, including applicable processes, methodologies, standards, products, and frameworks.

Driving the technical design of large-scale data platforms, utilizing modern and open source technologies, in a hybrid cloud environment.
Setting standards for data engineering functions; designing templates for the data management program that are scalable, repeatable, and simple.
Building strong multi-functional relationships and being recognized as a data and analytics subject matter expert among other teams.
Collaborating across teams to select appropriate data sources and develop data extraction and business rule solutions.
Sharing and incorporating best practices from the industry using new and upcoming tools and technologies in data management and analytics.
Organizing, planning, and developing solutions to sophisticated data management problem statements.
Defining and documenting architecture, capturing and documenting non-functional (architectural) requirements, preparing estimates, and defining technical solutions to proposals (RFPs).
Designing and developing reusable and scalable data models to suit business deliverables.
Designing and developing data pipelines.
Providing technical leadership to the project team across design-to-deployment activities, providing guidance, performing reviews, and preventing and resolving technical issues.
Collaborating with the engineering, DevOps, and admin teams to ensure alignment with efficient design practices, and fixing issues in dev, test, and production environments so that infrastructure is highly available and performing as expected.
Designing, implementing, and deploying high-performance, custom solutions.

Where you'll be working:
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

What we're looking for...
You are curious and passionate about data and truly believe in the high impact it can create for the business. People count on you for your expertise in data management across all phases of the software development cycle. You enjoy the challenge of solving complex data management problems and handling challenging priorities in a multifaceted, complex, and deadline-oriented environment. Building effective working relationships and collaborating with other technical teams across the organization comes naturally to you.

You'll need to have:
Six or more years of relevant experience.
Knowledge of information systems and their applications to data management processes.
Experience performing detailed analysis of business problems and technical environments and designing the solution.
Experience working with Google Cloud Platform and BigQuery.
Experience working with big data technologies and utilities: Hadoop, Spark, Scala, Kafka, NiFi.
Experience with relational SQL and NoSQL databases.
Experience with data pipeline, workflow management, and governance tools.
Experience with stream-processing systems (a minimal sketch follows this list).
Experience with object-oriented/object function scripting languages.
Experience building data solutions for machine learning and artificial intelligence.
Knowledge of data analytics and modeling tools.

Even better if you have:
A Master's degree in Computer Science or a related field.
Experience with frontend/web technologies: React JS, CSS, HTML.
Experience with backend services: Java Spring Boot, Node JS.
Experience working with data and visualization products.
Certifications in any data warehousing/analytics solutions.
Certifications in GCP.
Ability to clearly articulate the pros and cons of various technologies and platforms.
Experience collaborating with multi-functional teams and managing partner expectations.
Written and verbal communication skills.
Ability to work in a fast-paced agile development environment.
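As a purely illustrative sketch of the stream-processing experience listed above, the snippet below reads a Kafka topic with Spark Structured Streaming and computes a windowed latency aggregate. The broker, topic, and schema are hypothetical, processing time stands in for a real event-time column, and the job assumes the spark-sql-kafka connector package is available.

```python
# Streaming aggregation sketch (hypothetical broker, topic, and schema).
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("network_kpi_stream").getOrCreate()

schema = StructType([
    StructField("cell_id", StringType()),
    StructField("latency_ms", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical
         .option("subscribe", "network-kpis")               # hypothetical
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Average latency per cell over one-minute windows; processing time is
# used as the event clock here for simplicity.
agg = (events
       .withColumn("ts", F.current_timestamp())
       .withWatermark("ts", "1 minute")
       .groupBy(F.window("ts", "1 minute"), "cell_id")
       .agg(F.avg("latency_ms").alias("avg_latency_ms")))

# A production job would write to BigQuery or another serving store
# rather than the console.
query = agg.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```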

Posted 4 weeks ago

Apply