
1885 Data Engineering Jobs - Page 21

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

2.0 - 5.0 years

4 - 7 Lacs

Ahmedabad

Work from Office

Source: Naukri

Roles and Responsibilities:
- Collaborate with stakeholders to understand business requirements and data needs.
- Translate business requirements into scalable and efficient data engineering solutions.
- Design, develop, and maintain data pipelines using AWS serverless technologies.
- Implement data modeling techniques to optimize data storage and retrieval.
- Develop and deploy data processing and transformation frameworks for real-time and batch processing.
- Ensure data pipelines are scalable, reliable, and performant at large data volumes.
- Implement data documentation and observability tools and practices to monitor...
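The serverless-pipeline responsibility above can be illustrated with a minimal AWS Lambda-style handler. This is a sketch only: the event shape and field names are assumptions for illustration, and no AWS SDK calls are made.

```python
def handler(event, context=None):
    """Illustrative Lambda-style transform step: validate and reshape
    incoming records before they are written downstream."""
    out = []
    for rec in event.get("records", []):
        # Drop records missing a primary key; coerce amount to float.
        if "id" not in rec:
            continue
        out.append({"id": rec["id"], "amount": float(rec.get("amount", 0))})
    return {"count": len(out), "records": out}
```

In a real deployment this function would be triggered by S3 or Kinesis events and write its output to a downstream store.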

Posted 1 week ago

Apply

6.0 - 8.0 years

15 - 22 Lacs

Mumbai

Work from Office


Strong Python programming skills with expertise in Pandas, lxml, ElementTree, file I/O operations, smtplib, and the logging library. Basic understanding of XML structures and the ability to extract key parent and child tag elements from an XML document.
Required candidate profile: Java Spring Boot API microservices (8+ years), SQL (5+ years), Azure (3+ years).
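The parent/child XML extraction named above is a standard ElementTree task; a minimal sketch, where the XML sample and field names are illustrative rather than taken from the posting:

```python
import xml.etree.ElementTree as ET

XML = """<orders>
  <order id="1"><item>widget</item><qty>3</qty></order>
  <order id="2"><item>gadget</item><qty>1</qty></order>
</orders>"""

root = ET.fromstring(XML)
# Walk each parent <order> element and pull out its child tag values.
rows = [
    {"id": order.get("id"),
     "item": order.findtext("item"),
     "qty": int(order.findtext("qty"))}
    for order in root.iter("order")
]
# rows → [{'id': '1', 'item': 'widget', 'qty': 3}, {'id': '2', 'item': 'gadget', 'qty': 1}]
```

lxml exposes a largely compatible API, so the same traversal works there for larger documents.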

Posted 1 week ago

Apply

3.0 - 5.0 years

15 - 30 Lacs

Bengaluru

Work from Office


Position Summary:
We are seeking a Senior Software Development Engineer - Data Engineering with 3-5 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions.

Key Responsibilities:
- Work with cloud-based data solutions (Azure, AWS, GCP).
- Implement data modeling and warehousing solutions.
- Develop and maintain data pipelines for efficient data extraction, transformation, and loading (ETL).
- Design and optimize data storage solutions, including data warehouses and data lakes.
- Ensure data quality and integrity through data validation, cleansing, and error handling.
- Collaborate with data analysts, data architects, and software engineers to understand data requirements and deliver relevant data sets (e.g., for business intelligence).
- Implement data security measures and access controls to protect sensitive information.
- Monitor and troubleshoot issues in data pipelines, notebooks, and SQL queries to ensure seamless data processing.
- Develop and maintain Power BI dashboards and reports; work with DAX and Power Query to manipulate and transform data.

Basic Qualifications:
- Bachelor's or master's degree in computer science or data science.
- 3-5 years of experience in data engineering, big data processing, and cloud-based data platforms.
- Proficient in SQL and Python or Scala for data manipulation and processing.
- Proficient in developing data pipelines using Azure Synapse, Azure Data Factory, and Microsoft Fabric.
- Experience with Apache Spark, Databricks, and Snowflake is highly beneficial for handling big data and cloud-based analytics solutions.

Preferred Qualifications:
- Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub).
- Experience with BI and analytics tools (Tableau, Power BI, Looker).
- Familiarity with data observability tools (Monte Carlo, Great Expectations).
- Contributions to open-source data engineering projects.
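The validation, cleansing, and error-handling responsibility above can be sketched without any framework. The rules and field names here are illustrative assumptions, not from the posting; tools like Great Expectations formalize the same idea as declarative expectations.

```python
from datetime import datetime

def validate_row(row):
    """Return (is_valid, reason) for one record; a sketch of the
    row-level quality checks a pipeline might enforce."""
    if not row.get("customer_id"):
        return False, "missing customer_id"
    try:
        datetime.strptime(row["order_date"], "%Y-%m-%d")
    except (KeyError, ValueError):
        return False, "bad order_date"
    if float(row.get("amount", 0)) < 0:
        return False, "negative amount"
    return True, ""

rows = [
    {"customer_id": "C1", "order_date": "2024-05-01", "amount": "19.99"},
    {"customer_id": "",   "order_date": "2024-05-02", "amount": "5.00"},
]
# Keep valid rows; rejected rows would typically go to an error table.
clean = [r for r in rows if validate_row(r)[0]]
```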

Posted 1 week ago

Apply

2.0 - 5.0 years

6 - 13 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid


Software Developer II: Oracle Data Integrator (ODI)
Locations: Bangalore, Hyderabad, Chennai, Mumbai, Pune, Gurgaon, Kolkata
Experience: 2-5 years

About HashedIn:
We are software engineers who solve business problems with a product mindset for leading global organizations. By combining engineering talent with business insight, we build software and products that create new enterprise value. The secret to our success is a fast-paced learning environment, an extreme-ownership spirit, and a fun culture.

Why should you join us?
With the agility of a start-up and the opportunities of an enterprise, every day at HashedIn your work will make an impact that matters. So, if you are a problem solver looking to thrive in a dynamic, fun culture of inclusion, collaboration, and high performance, HashedIn is the place to be! From learning to leadership, this is your chance to take your software engineering career to the next level. Visit us at https://hashedin.com

Overview of the Role:
We are looking for an experienced Oracle Data Integrator (ODI) and Oracle Analytics Cloud (OAC) consultant to join our dynamic team. You will be responsible for designing, implementing, and optimizing cutting-edge data integration and analytics solutions. Your contributions will be pivotal in enhancing data-driven decision-making and delivering actionable insights across the organization.

Key Responsibilities:
- Develop robust data integration solutions using Oracle Data Integrator (ODI).
- Create, optimize, and maintain ETL/ELT workflows and processes.
- Configure and manage Oracle Analytics Cloud (OAC) to provide interactive dashboards and advanced analytics.
- Integrate and transform data from various sources to generate meaningful insights using OAC.
- Monitor and troubleshoot data pipelines and analytics solutions to ensure optimal performance.
- Ensure data quality, accuracy, and integrity across integration and reporting systems.
- Provide training and support to end users for OAC and ODI solutions.
- Analyze, design, develop, fix, and debug software programs for commercial or end-user applications; write code, complete programming, and perform testing and debugging.

Technical Skills:
- Expertise in ODI components such as Topology, Designer, Operator, and Agent.
- Experience in Java and WebLogic development.
- Proficiency in developing OAC dashboards, reports, and KPIs.
- Strong knowledge of SQL and PL/SQL for advanced data manipulation.
- Familiarity with Oracle databases and Oracle Cloud Infrastructure (OCI).
- Experience in data modeling and designing data warehouses.
- Strong analytical and problem-solving abilities; excellent communication and client-facing skills.
- Hands-on, end-to-end DWH implementation experience using ODI.
- Experience developing ETL processes (ETL control tables, error logging, auditing, data quality, etc.), with the ability to implement reusability, parameterization, and workflow design.
- Expertise in the Oracle ODI toolset and Oracle PL/SQL; knowledge of the ODI master and work repositories.
- Knowledge of data modelling and ETL design; setting up topology, building objects in Designer, monitoring Operator, and working with different types of KMs, Agents, etc.
- Packaging components and database operations such as aggregate, pivot, and union using ODI mappings; error handling, automation using ODI, load plans, and migration of objects.
- Design and development of complex mappings, process flows, and ETL scripts; experience with performance tuning of mappings.
- Ability to design ETL unit test cases and debug ETL mappings.
- Expertise in developing load plans and scheduling jobs.
- Ability to design data quality and reconciliation frameworks using ODI; integrate ODI with multiple sources/targets; experience with error recycling/management using ODI and PL/SQL.
- Expertise in database development (SQL/PL/SQL), including creating PL/SQL packages, procedures, functions, triggers, views, materialized views, and exception handling for retrieving, manipulating, checking, and migrating complex datasets in Oracle.
- Experience in data migration using SQL*Loader and import/export; SQL tuning and optimization using explain plans and SQL trace files.
- Strong knowledge of ELT/ETL concepts, design, and coding; partitioning and indexing strategy for optimal performance.
- Experience interacting with customers to understand business requirement documents and translate them into ETL specifications and high- and low-level design documents.
- Ability to work with minimal guidance or supervision in a time-critical environment.

Experience:
- 4-6 years of overall industry experience.
- 3+ years of experience with Oracle Data Integrator (ODI) in data integration projects.
- 2+ years of hands-on experience with Oracle Analytics Cloud (OAC).

Preferred Skills:
- Knowledge of Oracle Autonomous Data Warehouse (ADW) and Oracle Integration Cloud (OIC).
- Familiarity with other analytics tools such as Tableau or Power BI.
- Experience with scripting languages such as Python or shell scripting.
- Understanding of data governance and security best practices.

Educational Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.

© HashedIn by Deloitte 2025

Posted 1 week ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Chennai, Bengaluru

Work from Office


Key Responsibilities:
- Build scalable ETL pipelines and implement robust data solutions in Azure.
- Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
- Design and maintain secure and efficient data lake architecture.
- Work with stakeholders to gather data requirements and translate them into technical specs.
- Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
- Monitor data quality, performance bottlenecks, and scalability issues.
- Write clean, organized, reusable PySpark code in an Agile environment.
- Document pipelines, architectures, and best practices for reuse.

Must-Have Skills:
- Experience: 6+ years in data engineering.
- Tech stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults.
- Core expertise: data warehousing, ETL, data pipelines, data modelling, data governance, Agile, SDLC, containerization (Docker), clean coding practices.

Locations: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad

Posted 1 week ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office


Senior Data Engineer (Remote, Contract, 6 Months) - Databricks, ADF, and PySpark

We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions.

Key Responsibilities:
- Build scalable ETL pipelines and implement robust data solutions in Azure.
- Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults.
- Design and maintain secure and efficient data lake architecture.
- Work with stakeholders to gather data requirements and translate them into technical specs.
- Implement CI/CD pipelines for seamless data deployment using Azure DevOps.
- Monitor data quality, performance bottlenecks, and scalability issues.
- Write clean, organized, reusable PySpark code in an Agile environment.
- Document pipelines, architectures, and best practices for reuse.

Must-Have Skills:
- Experience: 6+ years in data engineering.
- Tech stack: SQL, Python, PySpark, Spark, Azure Databricks, ADF, ADLS Gen2, Azure DevOps, Key Vaults.
- Core expertise: data warehousing, ETL, data pipelines, data modelling, data governance, Agile, SDLC, containerization (Docker), clean coding practices.

Good-to-Have Skills:
- Event Hubs, Logic Apps.
- Power BI.
- Strong logic building and a competitive programming background.

Contract Details:
- Role: Senior Data Engineer
- Mode: Remote
- Duration: 6 months
- Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune
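"Clean, organized, reusable PySpark code" usually means factoring each transformation into a small named function and chaining them, the pattern PySpark's DataFrame.transform encourages. A framework-free sketch of the same pattern, with illustrative function and field names:

```python
from functools import reduce

def drop_nulls(rows):
    """Remove records containing any null value."""
    return [r for r in rows if all(v is not None for v in r.values())]

def normalise_city(rows):
    """Standardize the city field's whitespace and casing."""
    return [{**r, "city": r["city"].strip().title()} for r in rows]

def pipeline(rows, *steps):
    # Chain independent, individually testable transformation steps.
    return reduce(lambda acc, step: step(acc), steps, rows)

data = [{"city": "  mumbai "}, {"city": None}]
result = pipeline(data, drop_nulls, normalise_city)
# result → [{'city': 'Mumbai'}]
```

Each step can be unit-tested in isolation, which is what makes the style reusable across pipelines.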

Posted 1 week ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Mumbai, Maharashtra

Work from Office


S&P Global Ratings is looking for a Java/Angular full-stack engineering technologist/individual contributor to join the Ingestion Pipelines Engineering team within the Data Services group, a team of data and technology professionals who define and execute the strategic data roadmap for S&P Global Ratings. The successful candidate will participate in the design and build of S&P Ratings' cutting-edge ingestion pipeline solutions.

The Team:
You will be an expert contributor on the Ratings organization's Data Services Product Engineering team. This team, which has broad and expert knowledge of the Ratings organization's critical data domains, technology stacks, and architectural patterns, fosters knowledge sharing and collaboration that results in a unified strategy. All Data Services team members provide leadership, innovation, timely delivery, and the ability to articulate business value. Be a part of a unique opportunity to build and evolve S&P Ratings' next-gen analytics platform.

Responsibilities and Impact:
- Architect, design, and implement innovative software solutions to enhance S&P Ratings' cloud-based analytics platform.
- Mentor a team of engineers (as required), fostering a culture of trust, continuous growth, and collaborative problem-solving.
- Collaborate with business partners to understand requirements, ensuring technical solutions align with business goals.
- Manage and improve existing software solutions, ensuring high performance and scalability.
- Participate actively in all Agile scrum ceremonies, contributing to the continuous improvement of team processes.
- Produce comprehensive technical design documents and conduct technical walkthroughs.

What We're Looking For:

Basic Required Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent.
- Proficiency with software development lifecycle (SDLC) methodologies such as Agile and test-driven development.
- Experience designing/developing enterprise products, modern tech stacks, and data platforms.
- 4+ years of hands-on experience contributing to application architecture and design, with proven software/enterprise integration design patterns and full-stack knowledge of modern distributed front-end and back-end technology stacks.
- 4+ years of full-stack development experience in modern web technologies: Java/J2EE, UI frameworks such as Angular and React, SQL, Oracle, and NoSQL databases such as MongoDB.
- Experience designing transactional/data warehouse/data lake solutions and data integrations with the big data ecosystem, leveraging AWS cloud technologies.
- Thorough understanding of distributed computing.
- A passionate, smart, and articulate developer.
- Quality-first mindset, with a strong background in developing products for a global audience at scale.
- Excellent analytical thinking and interpersonal, oral, and written communication skills, with a strong ability to influence both IT and business partners.
- Superior knowledge of system architecture, object-oriented design, and design patterns.
- Good work ethic; a self-starter; results-oriented.
- Experience with Delta Lake systems such as Databricks, using AWS cloud technologies and PySpark, is a plus.

Additional Preferred Qualifications:
- Experience working with AWS.
- Experience with the SAFe Agile framework.
- Bachelor's/PG degree in Computer Science, Information Systems, or equivalent.
- Hands-on experience contributing to application architecture and design, with proven software/enterprise integration design principles.
- Ability to prioritize and manage work against critical project timelines in a fast-paced environment.
- Strong verbal and written communication skills.
- Ability to train and mentor.

Posted 1 week ago

Apply

6.0 - 11.0 years

18 - 33 Lacs

Pune, Bengaluru

Work from Office


Role: Data Engineer
Experience: 6-8 years (6+ years of relevant data engineering experience)
Notice period: immediate joiners only
Job location: Pune and Bangalore

Mandatory skills: strong PySpark (programming) and Databricks.

Technical and professional skills:
We are looking for a flexible, fast-learning, technically strong Data Engineer. Expertise is required in the following areas:
- Proficiency with Azure cloud services.
- Architect and implement ETL and data movement solutions.
- Design and implement data solutions using the medallion architecture, ensuring effective organization and flow of data through the bronze, silver, and gold layers.
- Optimize data storage and processing strategies to enhance performance and data accessibility across the stages of the medallion architecture.
- Collaborate with data engineers and analysts to define data access patterns and establish efficient data pipelines.
- Develop and oversee data flow strategies to ensure seamless data movement and transformation across environments and stages of the data lifecycle.
- Migrate data from traditional database systems to cloud environments.
- Strong hands-on experience working with streaming datasets.
- Build complex notebooks in Databricks to achieve business transformations.
- Hands-on expertise in data refinement using PySpark and Spark SQL.
- Familiarity with building datasets using Scala.
- Familiarity with tools such as Jira and GitHub.
- Experience leading agile scrum, sprint planning, and review sessions.
- Good communication and interpersonal skills.

Reach us: If you are interested in this position and meet the above qualifications, please reach out directly at swati@cielhr.com and share your updated resume highlighting your relevant experience.
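The medallion architecture mentioned above refines data through bronze (raw), silver (cleansed and typed), and gold (business-level aggregate) layers. A toy illustration in plain Python; in Databricks each layer would typically be a Delta table, and the field names here are invented for the example:

```python
# Bronze: raw ingested events, retained exactly as received.
bronze = [
    {"user": "a", "amt": "10"},
    {"user": "a", "amt": "x"},   # malformed record, kept only in bronze
    {"user": "b", "amt": "5"},
]

def to_silver(records):
    """Silver: cleansed, typed records; rows that fail parsing are dropped."""
    out = []
    for r in records:
        try:
            out.append({"user": r["user"], "amt": float(r["amt"])})
        except ValueError:
            pass
    return out

def to_gold(records):
    """Gold: reporting-ready aggregate (total amount per user)."""
    totals = {}
    for r in records:
        totals[r["user"]] = totals.get(r["user"], 0.0) + r["amt"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
# gold → {'a': 10.0, 'b': 5.0}
```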

Posted 1 week ago

Apply

5.0 - 10.0 years

18 - 25 Lacs

Bengaluru

Remote


Job Title: Data Engineer - ETL & Spatial Data Expert
Locations: Bengaluru / Gurugram / Nagpur / Remote
Department: Data Engineering / GIS / ETL
Experience: as per requirement (CTC capped at 3.5x of experience in years)
Notice period: max 30 days

Role Overview:
We are looking for a detail-oriented and technically proficient Data Engineer with strong experience in FME, spatial data handling, and ETL pipelines. The role involves building, transforming, validating, and automating complex geospatial datasets and dashboards to support operational and analytical needs. Candidates will work closely with internal teams, local authorities (LA), and HMLR specifications.

Key Responsibilities:
1. Data integration and transformation
- Build ETL pipelines using FME to ingest and transform data from Idox/CCF systems.
- Create custom transformers in FME to apply reusable business rules.
- Use Python (standalone or within FME) for custom transformations, date parsing, and validations.
- Conduct data profiling to assess completeness, consistency, and accuracy.
2. Spatial data handling
- Manage and query spatial datasets using PostgreSQL/PostGIS.
- Handle spatial formats such as GeoPackage, GML, GeoJSON, and Shapefiles.
- Fix geometry issues such as overlaps or invalid polygons using FME or SQL.
- Ensure proper coordinate system alignment (e.g., EPSG:27700).
3. Automation and workflow orchestration
- Use FME Server/FME Cloud to automate and monitor ETL workflows.
- Schedule batch processes via CI/CD, cron, or Python.
- Implement audit trails and logs for all data processes and rule applications.
4. Dashboard and reporting integration
- Write SQL views and aggregations to support dashboard visualizations.
- Optionally integrate with Power BI, Grafana, or Superset.
- Maintain metadata tagging for each data batch.
5. Collaboration and communication
- Interpret validation reports and collaborate with analysts/ops teams.
- Translate business rules into FME logic or SQL queries.
- Map data to LA/HMLR schemas accurately.

Preferred Tools & Technologies:
- ETL: FME (Safe Software), Talend (optional), Python
- Spatial DB: PostGIS, Oracle Spatial
- GIS tools: QGIS, ArcGIS
- Scripting: Python, SQL
- Validation: FME testers, AttributeValidator, SQL views
- Formats: CSV, JSON, GPKG, XML, Shapefiles
- Collaboration: Jira, Confluence, Git

Ideal Candidate Profile:
- Strong hands-on experience with FME workflows and spatial data transformation.
- Proficient in scripting with Python and working with PostGIS.
- Demonstrated ability to build scalable data automation pipelines.
- Effective communicator capable of converting requirements into technical logic.
- Past experience with LA or HMLR data specifications is a plus.

Required Qualifications:
- B.E./B.Tech. (Computer Science, IT, or ECE), B.Sc. (IT/CS), or full-time MCA.

Strict Screening Criteria:
- No employment gaps over 4 months.
- Do not consider candidates from Jawaharlal Nehru University.
- Exclude profiles from Hyderabad or Andhra Pradesh (education or employment).
- Reject profiles with BCA, B.Com, Diploma, or open-university backgrounds.
- Projects must clearly detail the technical tools/skills used.
- Max CTC is 3.5x of total years of experience; no flexibility on notice period or compensation.
- No candidates from Noida for the Gurugram location.
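One of the simplest geometry defects mentioned above, an unclosed polygon ring, can be detected and repaired without FME or PostGIS. This is a deliberately minimal sketch; production workflows would use PostGIS's ST_MakeValid or an FME geometry transformer, and the coordinates below are illustrative EPSG:27700 easting/northing values:

```python
def close_ring(ring):
    """A valid polygon ring must end where it starts; append the first
    vertex if the ring is left open."""
    if ring and ring[0] != ring[-1]:
        return ring + [ring[0]]
    return list(ring)

open_ring = [(529090.0, 181680.0), (529120.0, 181680.0), (529120.0, 181710.0)]
fixed = close_ring(open_ring)
# fixed now has 4 vertices, with fixed[0] == fixed[-1]
```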

Posted 1 week ago

Apply

2.0 - 5.0 years

5 - 15 Lacs

Hyderabad

Work from Office


Company Overview:
Accordion works at the intersection of sponsors and management teams throughout every stage of the investment lifecycle, providing hands-on, execution-focused support to elevate data and analytics capabilities. So, what does it mean to work at Accordion? It means joining 1,000+ analytics, data science, finance, and technology experts in a high-growth, agile, and entrepreneurial environment while transforming how portfolio companies drive value. It also means making your mark on Accordion's future by embracing a culture rooted in collaboration and a firm-wide commitment to building something great, together. Headquartered in New York City with 10 offices worldwide, Accordion invites you to join our journey.

Data & Analytics (Accordion | Data & Analytics):
Accordion's Data & Analytics (D&A) team delivers cutting-edge, intelligent solutions to a global clientele, leveraging a blend of domain knowledge, sophisticated technology tools, and deep analytics capabilities to tackle complex business challenges. We partner with Private Equity clients and their portfolio companies across diverse sectors, including retail, CPG, healthcare, media & entertainment, technology, and logistics. The D&A team delivers data and analytical solutions designed to streamline reporting capabilities and enhance business insights across vast and complex data sets spanning sales, operations, marketing, pricing, customer strategies, and more.

Location: Hyderabad, Telangana

Role Overview:
Accordion is looking for a Senior Data Engineer with database, data warehouse, and business intelligence experience. He/she will be responsible for the design, development, configuration/deployment, and maintenance of the above technology stack, and must have an in-depth understanding of the various tools and technologies in this domain to design and implement robust, scalable solutions that address clients' current and future requirements at optimal cost. The Senior Data Engineer should be able to understand various architectures and recommend the right fit depending on the use case of the project. A successful Senior Data Engineer possesses strong working business knowledge and familiarity with multiple tools and techniques, along with industry standards and best practices in business intelligence and data warehousing environments, plus strong organizational, critical-thinking, and communication skills.

What You Will Do:
- Understand the business requirements thoroughly to design and develop the BI architecture.
- Determine business intelligence and data warehousing solutions that meet business needs.
- Perform data warehouse design and modelling according to established standards.
- Work closely with business teams to arrive at methodologies to develop KPIs and metrics.
- Work with the project manager in developing and executing project plans within the assigned schedule and timeline.
- Develop standard reports and functional dashboards based on business requirements.
- Deliver high-quality reports in a timely and accurate manner.
- Conduct training programs and knowledge-transfer sessions for junior developers when needed.
- Recommend improvements to provide optimal reporting solutions.

Ideally, You Have:
- An undergraduate degree (B.E./B.Tech.); tier-1/tier-2 colleges preferred.
- 2-5 years of experience in a related field.
- Proven expertise in SSIS, SSAS, and SSRS (MSBI suite).
- In-depth knowledge of databases (SQL Server, MySQL, Oracle, etc.) and data warehouses (Azure Synapse, AWS Redshift, Google BigQuery, Snowflake, etc.).
- In-depth knowledge of business intelligence tools (any one of Power BI, Tableau, Qlik, DOMO, Looker, etc.).
- Good understanding of Azure (Data Factory & Pipelines, SQL Database & Managed Instances, DevOps, Logic Apps, Analysis Services) and AWS (Glue, Aurora, DynamoDB, Redshift, QuickSight).
- Proven ability to take initiative and be innovative.
- An analytical mind with a problem-solving attitude.

Why Explore a Career at Accordion:
- High-growth environment: semi-annual performance management and promotion cycles, coupled with a strong meritocratic culture, enable a fast track to leadership responsibility.
- Cross-domain exposure: interesting and challenging work streams across industries and domains that keep you excited, motivated, and on your toes.
- Entrepreneurial environment: intellectual freedom to make decisions and own them. We expect you to spread your wings and assume larger responsibilities.
- Fun culture and peer group: a non-bureaucratic, fun working environment with a strong peer group that will challenge you and accelerate your learning curve.
- Other benefits for full-time employees: health and wellness programs, including employee health insurance covering immediate family members and parents, term life insurance, free health camps, discounted health services (including vision and dental) for employees and family members, free doctor consultations, and counsellors; corporate meal card options for ease of use and tax benefits; team lunches, company-sponsored team outings, and celebrations; cab reimbursement for women employees beyond a certain time of day; a robust leave policy to support work-life balance, with a specially designed leave structure for maternity and related requests; a reward and recognition platform to celebrate professional and personal milestones; and a positive, transparent work environment with various employee engagement and benefit initiatives supporting personal and professional learning and development.

Posted 1 week ago

Apply

12.0 - 20.0 years

35 - 50 Lacs

Bengaluru

Hybrid


Data Architect with cloud expertise: data architecture, data integration, and data engineering. ETL/ELT: Talend, Informatica, Apache NiFi. Big data: Hadoop, Spark. Cloud platforms: AWS, Azure, GCP; Redshift, BigQuery. Languages: Python, SQL, Scala. Compliance: GDPR, CCPA.

Posted 1 week ago

Apply

3.0 - 5.0 years

6 - 10 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office


About the Role:
We're seeking a Data Engineering expert with a passion for teaching and building impactful learning experiences. This role goes beyond traditional instruction: it's about designing engaging, industry-relevant content and delivering it in a way that sparks curiosity and problem-solving among young professionals. If you're someone who thrives in a startup-like, hands-on learning environment and loves to simplify complex technical concepts, we want you on our team.

Qualification: B.E./M.Tech in Computer Science, Data Engineering, or related fields.

Key Skills & Expertise:
- Strong practical experience with data engineering tools and frameworks (e.g., SQL, Python, Spark, Kafka, Airflow, Hadoop).
- Ability to design course modules that emphasize application, scalability, and problem-solving.
- Demonstrated experience in mentoring, teaching, or conducting technical workshops.
- Passion for product thinking: guiding students to go beyond code and build real solutions.
- Excellent communication and leadership skills.
- Adaptability and a growth mindset.

Your Responsibilities at Inunity:
- Design and deliver an industry-relevant data engineering curriculum with a focus on solving complex, real-world problems.
- Mentor students through the process of building product-grade data solutions, from identifying the problem to deploying a prototype.
- Conduct hands-on sessions, coding labs, and data engineering workshops.
- Assess student progress through assignments, evaluations, and project reviews.
- Encourage innovation and entrepreneurship by helping students transform ideas into structured products.
- Continuously improve content based on student outcomes and industry trends.
- Be a role model who inspires, supports, and challenges learners to grow into capable tech professionals.

Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, India

Posted 1 week ago

Apply

5.0 - 9.0 years

4 - 7 Lacs

Gurugram

Work from Office


Primary Skills:
- SQL (advanced level)
- SSAS (SQL Server Analysis Services): multidimensional and/or tabular models
- MDX / DAX (strong querying capabilities)
- Data modeling (star schema, snowflake schema)

Secondary Skills:
- ETL processes (SSIS or similar tools)
- Power BI / reporting tools
- Azure Data Services (optional but a plus)

Role & Responsibilities:
- Design, develop, and deploy SSAS models (both tabular and multidimensional).
- Write and optimize MDX/DAX queries for complex business logic.
- Work closely with business analysts and stakeholders to translate requirements into robust data models.
- Design and implement ETL pipelines for data integration.
- Build reporting datasets and support BI teams in developing insightful dashboards (Power BI preferred).
- Optimize existing cubes and data models for performance and scalability.
- Ensure data quality, consistency, and governance standards.

Top Skill Set:
- SSAS (tabular + multidimensional modeling)
- Strong MDX and/or DAX query writing
- Advanced SQL for data extraction and transformation
- Data modeling concepts (fact/dimension, slowly changing dimensions, etc.)
- ETL tools (SSIS preferred)
- Power BI or similar BI tools
- Understanding of OLAP and OLTP concepts
- Performance tuning (SSAS/SQL)

Posted 1 week ago

Apply

5.0 - 9.0 years

13 - 17 Lacs

Pune

Work from Office


Diacto is looking for a highly capable Data Architect with 5 to 9 years of experience to lead cloud data platform initiatives with a primary focus on Snowflake and Azure Data Hub. This individual will play a key role in defining the data architecture strategy, implementing robust data pipelines, and enabling enterprise-grade analytics solutions. This is an on-site role based in our Baner, Pune office.

Qualifications:
- B.E./B.Tech in Computer Science, IT, or a related discipline.
- MCS/MCA or equivalent preferred.

Key Responsibilities:
- Design and implement enterprise-level data architecture with a strong focus on Snowflake and Azure Data Hub.
- Define standards and best practices for data ingestion, transformation, and storage.
- Collaborate with cross-functional teams to develop scalable, secure, and high-performance data pipelines.
- Lead Snowflake environment setup, configuration, performance tuning, and optimization.
- Integrate Azure Data Services with Snowflake to support diverse business use cases.
- Implement governance, metadata management, and security policies.
- Mentor junior developers and data engineers on cloud data technologies and best practices.

Experience and Skills Required:
- 5-9 years of overall experience in data architecture or data engineering roles.
- Strong hands-on expertise in Snowflake, including design, development, and performance tuning.
- Solid experience with Azure Data Hub and Azure Data Services (Data Lake, Synapse, etc.).
- Understanding of cloud data integration techniques and ELT/ETL frameworks.
- Familiarity with data orchestration tools such as dbt, Airflow, or Azure Data Factory.
- Proven ability to handle structured, semi-structured, and unstructured data.
- Strong analytical, problem-solving, and communication skills.

Nice to Have:
- Certifications in Snowflake and/or Microsoft Azure.
- Experience with CI/CD tools such as GitHub for code versioning and deployment.
- Familiarity with real-time or near-real-time data ingestion.

Why Join Diacto Technologies:
- Work with a cutting-edge tech stack and cloud-native architectures.
- Be part of a data-driven culture with opportunities for continuous learning.
- Collaborate with industry experts and build transformative data solutions.

Posted 1 week ago

Apply

7.0 - 10.0 years

17 - 32 Lacs

Bengaluru

Remote


Remote - Expertise in Snowflake development & coding; Azure Data Factory (ADF); knowledge of CI/CD; proficient in ETL design across platforms like Denodo, Data Services & CPI-DS; React, Node.js, & REST API integration; solid understanding of cloud platforms

Posted 1 week ago

Apply

1.0 - 3.0 years

1 - 2 Lacs

Nagercoil

Work from Office


Job Overview: We are looking for a skilled Python and Data Science Programmer to develop and implement data-driven solutions. The ideal candidate should have strong expertise in Python, machine learning, data analysis, and statistical modeling.
Key Responsibilities:
Data Analysis & Processing: Collect, clean, and preprocess large datasets for analysis.
Machine Learning: Build, train, and optimize machine learning models for predictive analytics.
Algorithm Development: Implement data science algorithms and statistical models for problem-solving.
Automation & Scripting: Develop Python scripts and automation tools for data processing and reporting.
Data Visualization: Create dashboards and visual reports using Matplotlib, Seaborn, Plotly, or Power BI/Tableau.
Database Management: Work with SQL and NoSQL databases for data retrieval and storage.
Collaboration: Work with cross-functional teams, including data engineers, business analysts, and software developers.
Research & Innovation: Stay updated with the latest trends in AI, ML, and data science to improve existing models.
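The "collect, clean, and preprocess" step listed above can be sketched with the standard library alone. The record shape and field names here are invented for illustration:

```python
from statistics import mean

# Hypothetical raw records with missing and malformed values.
raw = [
    {"user": "a", "spend": "120.5"},
    {"user": "b", "spend": None},   # missing value -> dropped
    {"user": "c", "spend": "80"},
    {"user": "d", "spend": "n/a"},  # malformed value -> dropped
]

def clean(records):
    """Keep only records whose 'spend' parses as a float."""
    out = []
    for r in records:
        try:
            out.append({**r, "spend": float(r["spend"])})
        except (TypeError, ValueError):
            continue
    return out

rows = clean(raw)
kpis = {"count": len(rows), "avg_spend": mean(r["spend"] for r in rows)}
print(kpis)  # {'count': 2, 'avg_spend': 100.25}
```

In practice a role like this would express the same cleaning with pandas over much larger datasets; the try/except cast is the core idea either way.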

Posted 1 week ago

Apply

13.0 - 18.0 years

45 - 50 Lacs

Bengaluru

Work from Office


Job Title: Name List Screening and Transaction Screening Model Strats, AS
Role Description: Group Strategic Analytics (GSA) is part of the Group Chief Operation Office (COO), which acts as the bridge between the Bank's business and infrastructure functions to help deliver the efficiency, control, and transformation goals of the Bank. You will work within the Global Strategic Analytics Team as part of a global model strategy and deployment of Name List Screening and Transaction Screening. To be successful in this role, you will be familiar with the most recent data science methodologies and have a delivery-centric attitude, strong analytical skills, and a detail-oriented approach to breaking down complex matters into more understandable details.
The purpose of Name List Screening and Transaction Screening is to identify and investigate unusual customer names, transactions, and behavior, to understand whether that activity is considered suspicious from a financial crime perspective, and to report that activity to the government. You will be responsible for helping to implement and maintain the models for Name List Screening and Transaction Screening to ensure that all relevant criminal risks, typologies, products, and services are properly monitored.
We are looking for a high-performing Associate in financial crime model development, tuning, and analytics to support the global strategy for screening systems across Name List Screening (NLS) and Transaction Screening (TS). This role offers the opportunity to work on key model initiatives within a cross-regional team and contribute directly to the bank's risk mitigation efforts against financial crime. You will support model tuning and development efforts, support regulatory deliverables, and collaborate with cross-functional teams including Compliance, Data Engineering, and Technology.
Your key responsibilities: Support the design and implementation of the model framework for name and transaction screening, including coverage, data, model development, and optimisation. Support key data initiatives, including but not limited to data lineage, data quality controls, and data quality issues management. Document model logic and liaise with Compliance and Model Risk Management teams to ensure screening systems and scenarios adhere to all model governance standards. Participate in research projects on innovative solutions to make detection models more proactive. Assist in model testing, calibration, and performance monitoring. Ensure detailed metrics and reporting are developed to provide transparency and maintain effectiveness of name and transaction screening models. Support all examinations and reviews performed by regulators, monitors, and internal audit.
Your skills and experience: Advanced degree (Master's or PhD) in a quantitative discipline (Mathematics, Computer Science, Data Science, Physics, or Statistics). 13 years' experience in data analytics or model development (internships included). Proficiency in designing, implementing (Python, Spark, cloud environments), and deploying quantitative models in a large financial institution, preferably in Front Office; a hands-on approach is needed. Experience utilizing Machine Learning and Artificial Intelligence. Experience with data and the ability to clearly articulate data requirements as they relate to NLS and TS, including comprehensiveness, quality, accuracy, and integrity. Knowledge of the bank's products and services, including those related to corporate banking, investment banking, private banking, and asset management.
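A core primitive behind name-list screening is fuzzy matching of a customer name against a watchlist. The sketch below shows one naive approach with the standard library's difflib; the watchlist entries and the 0.85 threshold are illustrative assumptions, not the bank's actual model:

```python
from difflib import SequenceMatcher

WATCHLIST = ["John Q. Smith", "Acme Trading LLC"]  # hypothetical entries

def screen(name, threshold=0.85):
    """Return watchlist entries whose similarity ratio meets the threshold."""
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

print(screen("Jon Q Smith"))   # a near-miss spelling still flags "John Q. Smith"
print(screen("Unrelated Co"))  # no hits
```

Production screening models layer phonetic encodings, tokenization, and tuned thresholds on top of this idea; the calibration work described above is largely about where that threshold sits.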

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Hyderabad, Bengaluru

Hybrid


Key Skills & Responsibilities: Hands-on experience with AWS services: S3, Lambda, Glue, API Gateway, and SQS. Strong data engineering expertise on AWS, with proficiency in Python, PySpark, and SQL. Experience in batch job scheduling and managing data dependencies across pipelines. Familiarity with data processing tools such as Apache Spark and Airflow. Ability to automate repetitive tasks and build reusable frameworks for improved efficiency. Provide RunOps/DevOps support, and manage the ongoing operation and monitoring of data services. Ensure high performance, scalability, and reliability of data workflows in cloud environments.
Skills: AWS, S3, Glue, Apache Spark, Lambda, Airflow, SQL, API Gateway, PySpark, SQS, Python, DevOps support
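An S3-to-Lambda-to-SQS flow like the one implied above usually starts with a handler that unpacks the S3 event payload. The sketch below covers only that parsing step; the bucket and key are fabricated, and the actual SQS send is left as a comment since it needs live AWS credentials:

```python
import json
import urllib.parse

def lambda_handler(event, context=None):
    """Extract (bucket, key) pairs from a standard S3 put-event payload."""
    objects = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        bucket = s3["bucket"]["name"]
        # S3 object keys arrive URL-encoded in event notifications.
        key = urllib.parse.unquote_plus(s3["object"]["key"])
        objects.append({"bucket": bucket, "key": key})
        # A real pipeline might forward each object reference to SQS here:
        # boto3.client("sqs").send_message(QueueUrl=..., MessageBody=json.dumps(...))
    return {"statusCode": 200, "body": json.dumps(objects)}

# Minimal fabricated event mirroring the S3 notification shape.
event = {"Records": [{"s3": {"bucket": {"name": "raw-data"},
                             "object": {"key": "incoming/file+1.csv"}}}]}
print(lambda_handler(event))
```

Note the `unquote_plus` call: keys with spaces or special characters are delivered encoded, a common source of "object not found" bugs in Glue/Lambda pipelines.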

Posted 1 week ago

Apply

4.0 - 7.0 years

6 - 9 Lacs

Hyderabad, Bengaluru

Hybrid


Job Summary: We are seeking a skilled Azure Data Engineer with 4 years of overall experience, including at least 2 years of hands-on experience with Azure Databricks (must). The ideal candidate will have strong expertise in building and maintaining scalable data pipelines and working across cloud-based data platforms.
Key Responsibilities: Design, develop, and optimize large-scale data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse. Implement data lake solutions and work with structured and unstructured datasets in Azure Data Lake Storage (ADLS). Collaborate with data scientists, analysts, and engineering teams to design and deliver end-to-end data solutions. Develop ETL/ELT processes and integrate data from multiple sources. Monitor, debug, and optimize workflows for performance and cost-efficiency. Ensure data governance, quality, and security best practices are maintained.
Must-Have Skills: 4+ years of total experience in data engineering. 2+ years of experience with Azure Databricks (PySpark, Notebooks, Delta Lake). Strong experience with Azure Data Factory, Azure SQL, and ADLS. Proficient in writing SQL queries and Python/Scala scripting. Understanding of CI/CD pipelines and version control systems (e.g., Git). Solid grasp of data modeling and warehousing concepts.
Skills: Azure Synapse, data modeling, data engineering, Azure, Azure Databricks, Azure Data Lake Storage (ADLS), CI/CD, ETL, ELT, data warehousing, SQL, Scala, Git, Azure Data Factory, Python
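Much of the Delta Lake work mentioned above boils down to MERGE-style upserts. Below is a plain-Python sketch of those semantics; in Databricks this would be expressed as `MERGE INTO` SQL or `DeltaTable.merge`, so the dict-based "table" here is only an illustration:

```python
def merge_upsert(target, updates, key="id"):
    """Mimic MERGE semantics: update rows whose key exists, insert the rest."""
    by_key = {row[key]: dict(row) for row in target}
    for row in updates:
        if row[key] in by_key:
            by_key[row[key]].update(row)   # WHEN MATCHED THEN UPDATE
        else:
            by_key[row[key]] = dict(row)   # WHEN NOT MATCHED THEN INSERT
    return sorted(by_key.values(), key=lambda r: r[key])

target = [{"id": 1, "status": "old"}, {"id": 2, "status": "old"}]
updates = [{"id": 2, "status": "new"}, {"id": 3, "status": "new"}]
print(merge_upsert(target, updates))
# [{'id': 1, 'status': 'old'}, {'id': 2, 'status': 'new'}, {'id': 3, 'status': 'new'}]
```

Delta Lake does the same matching at scale, with the added guarantees of ACID transactions and time travel over the underlying Parquet files.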

Posted 1 week ago

Apply

2.0 - 6.0 years

10 - 15 Lacs

Hyderabad

Work from Office


JD: Senior Data Engineer
Job Location: Hyderabad
We are looking for an experienced Data Engineer with 3+ years of expertise in Databricks, AWS, Scala, and Apache Spark to work with one of our leading Japanese automotive clients based in North America, where cutting-edge technology meets innovation. The ideal candidate will have a strong foundation in big data processing, cloud technologies, and performance tuning to enable efficient data management. You'll also collaborate with global teams, work with advanced cloud technologies, and contribute to a forward-thinking data ecosystem that powers the future of automotive engineering.
Who Can Apply: Only candidates who can join immediately or within 1 week can apply. Ideal for those seeking technical growth and work on a global project with cutting-edge technologies. Best suited for professionals passionate about innovation and problem-solving.
Key Responsibilities: Architect, design, and implement scalable ETL pipelines for large data processing. Develop and optimize data solutions using Databricks, AWS, Scala, and Spark. Ensure high-performance data processing with distributed computing techniques. Implement best practices for data modeling, transformation, and governance. Work closely with cross-functional teams to improve data reliability and efficiency. Monitor and troubleshoot data pipelines for performance improvements.
Required Skills & Qualifications: Excellent communication and ability to handle direct client interactions. 2+ years of experience in Data Engineering. Expertise in Databricks, AWS, Scala, and Apache Spark. Strong knowledge of big data architecture, ETL processes, and cloud data solutions. Ability to write optimized and scalable Spark jobs for distributed computing.

Posted 1 week ago

Apply

8.0 - 12.0 years

11 - 18 Lacs

Faridabad

Remote


We are seeking an experienced and highly skilled Senior Data Scientist to drive data-driven decision-making and innovation. In this role, you will leverage your expertise in advanced analytics, machine learning, and big data technologies to solve complex business challenges. You will be responsible for designing predictive models, building scalable data pipelines, and uncovering actionable insights from structured and unstructured datasets. Collaborating with cross-functional teams, your work will empower strategic decision-making and foster a data-driven culture across the organization.
Role & responsibilities:
1. Data Exploration and Analysis: Collect, clean, and preprocess large and complex datasets from diverse sources, including SQL databases, cloud platforms, and APIs. Perform exploratory data analysis (EDA) to identify trends, patterns, and relationships in data. Develop meaningful KPIs and metrics tailored to business objectives.
2. Advanced Modeling and Machine Learning: Design, implement, and optimize predictive and prescriptive models using statistical techniques and machine learning algorithms. Evaluate model performance and ensure scalability and reliability in production. Work with both structured and unstructured data for tasks such as text analysis, image processing, and recommendation systems.
3. Data Engineering and Automation: Build and optimize scalable ETL pipelines for data processing and feature engineering. Collaborate with data engineers to ensure seamless integration of data science solutions into production environments. Leverage cloud platforms (e.g., AWS, Azure, GCP) for scalable computation and storage.
4. Data Visualization and Storytelling: Communicate complex analytical findings effectively through intuitive visualizations and presentations. Create dashboards and visualizations using tools such as Power BI, Tableau, or Python libraries (e.g., Matplotlib, Seaborn, Plotly). Translate data insights into actionable recommendations for stakeholders.
5. Cross-functional Collaboration and Innovation: Partner with business units, product teams, and data engineers to define project objectives and deliver impactful solutions. Stay updated with emerging technologies and best practices in data science, machine learning, and AI. Contribute to fostering a data-centric culture within the organization by mentoring junior team members and promoting innovative approaches.
Preferred candidate profile: Proficiency in Python, R, or other data science programming languages. Strong knowledge of machine learning libraries and frameworks (e.g., Scikit-learn, TensorFlow, PyTorch). Advanced SQL skills for querying and managing relational databases. Experience with big data technologies (e.g., Spark, Hadoop) and cloud platforms (AWS, Azure, GCP), preferably MS Azure. Familiarity with data visualization tools such as Power BI, Tableau, or equivalent, preferably MS Power BI.
Analytical and Problem-solving Skills: Expertise in statistical modeling, hypothesis testing, and experiment design. Strong problem-solving skills to address business challenges through data-driven solutions. Ability to conceptualize and implement metrics/KPIs tailored to business needs.
Soft Skills: Excellent communication skills to translate complex technical concepts into business insights. Collaborative mindset with the ability to work in cross-functional teams. Proactive and detail-oriented approach to project management and execution.
Education and Experience: Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Mathematics, or a related field. 8+ years of experience in data science, advanced analytics, or a similar field. Proven track record of deploying machine learning models in production environments.
Perks & Benefits: Best as per market standard. Work-from-home opportunity. 5-day work week. Shift timing: 2 PM to 11 PM IST (flexible hours).
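The "evaluate model performance" responsibility above always rests on a held-out split and a metric. This standard-library sketch shows the idea with a trivial majority-class baseline on fabricated data; real work would substitute a scikit-learn or TensorFlow model for the baseline:

```python
import random
from collections import Counter

random.seed(0)
# Fabricated labeled dataset: the feature (i) is ignored by the baseline model.
data = [(i, i % 3 != 0) for i in range(100)]  # roughly two-thirds positive labels

random.shuffle(data)
split = int(0.8 * len(data))
train, test = data[:split], data[split:]

# Majority-class baseline: predict the most common training label for everything.
majority = Counter(label for _, label in train).most_common(1)[0][0]
accuracy = sum(label == majority for _, label in test) / len(test)
print(f"baseline accuracy: {accuracy:.2f}")
```

Any candidate model should beat this number; comparing against the majority-class baseline is the simplest guard against a model that looks accurate only because the classes are imbalanced.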

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Pune

Work from Office


Create Solution Outline and Macro Design to describe end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, serving layer, design patterns, and platform architecture principles for the data platform. Contribute to pre-sales and sales support through RFP responses, solution architecture, planning, and estimation. Contribute to reusable component / asset / accelerator development to support capability development. Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud, and related technologies. Participate in customer PoCs to deliver the outcomes. Participate in delivery reviews / product reviews and quality assurance, and work as a design authority.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise: Experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems. Experience in data engineering and architecting data platforms. Experience in architecting and implementing data platforms on the Azure Cloud Platform. Experience on Azure cloud is mandatory (ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, Airflow. Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks.
Preferred technical and professional experience: Experience in architecting complex data platforms on the Azure Cloud Platform and on-premises. Experience and exposure to implementation of Data Fabric and Data Mesh concepts and solutions such as Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric. Exposure to data cataloging and governance solutions such as Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake data glossary, etc.

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Pune

Work from Office


As a Big Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.
In this role, your responsibilities may include: develop, maintain, evaluate, and test big data solutions; be involved in data engineering activities like creating pipelines/workflows for Source to Target and implementing solutions that tackle the client's needs.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise: Big Data development with Hadoop, Hive, Spark, PySpark, and strong SQL. Ability to incorporate a variety of statistical and machine learning techniques. Basic understanding of cloud platforms (AWS, Azure, etc.). Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer. Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java.
Preferred technical and professional experience: Basic understanding of or experience with predictive/prescriptive modeling skills. You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
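A Source-to-Target pipeline of the kind described above reduces to extract, transform, and load stages. Here is a toy standard-library version over an in-memory CSV; the column names and the negative-amount filter are made-up stand-ins for real source data and business rules:

```python
import csv
import io

SOURCE_CSV = "id,amount\n1,10.0\n2,-3.5\n3,7.25\n"  # stand-in for a real source file

def extract(text):
    """Read CSV rows from the source into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Cast types and filter out negative amounts (an illustrative business rule)."""
    out = []
    for r in rows:
        amount = float(r["amount"])
        if amount >= 0:
            out.append({"id": int(r["id"]), "amount": amount})
    return out

def load(rows, target):
    """Append transformed rows to an in-memory 'target table'."""
    target.extend(rows)
    return target

target_table = load(transform(extract(SOURCE_CSV)), [])
print(target_table)  # [{'id': 1, 'amount': 10.0}, {'id': 3, 'amount': 7.25}]
```

At production scale the same three stages map onto Spark reads, DataFrame transformations, and writes to Hive or a warehouse table.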

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Navi Mumbai

Work from Office


As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution, and execution.
Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise: Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation. Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy. SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.
Preferred technical and professional experience: Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Mumbai

Work from Office


As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution, and execution.
Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise: Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data engineering skills: strong understanding of ETL pipelines, data modelling, and data warehousing concepts. Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation. Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy. SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.
Preferred technical and professional experience: Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.

Posted 1 week ago

Apply