984 Databricks Jobs - Page 16

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

10.0 - 15.0 years

22 - 37 Lacs

Bengaluru

Work from Office


Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Join Kyndryl as a Data Architect, where you will unlock the power of data to drive strategic decisions and shape the future of our business. As a key member of our team, you will harness your expertise in basic statistics, business fundamentals, and communication to uncover valuable insights and transform raw data into rigorous visualizations and compelling stories. In this role, you will have the opportunity to work closely with our customers as part of a top-notch team. You will dive deep into vast IT datasets, unraveling the mysteries hidden within, and discovering trends and patterns that will revolutionize our customers' understanding of their own landscapes. Armed with your advanced analytical skills, you will draw compelling conclusions and develop data-driven insights that will directly impact their decision-making processes.

Your Role and Responsibilities:
- Data Architecture Design: Design scalable, secure, and high-performance data architectures, including data warehouses, data lakes, and BI solutions.
- Data Modeling: Develop and maintain complex data models (ER, star, and snowflake schemas) to support BI and analytics requirements.
- BI Strategy and Implementation: Lead the design and implementation of BI solutions using platforms like Power BI, Tableau, Qlik, and Looker.
- ETL/ELT Management: Architect efficient ETL/ELT pipelines for data transformation and integration across multiple data sources.
- Data Governance: Implement data quality, data lineage, and metadata management frameworks to ensure data reliability and compliance.
- Performance Optimization: Optimize data storage and retrieval processes for speed, scalability, and efficiency.
- Stakeholder Collaboration: Work closely with business and technical teams to define data requirements and deliver actionable insights.
- Cloud and Big Data: Utilize cloud-native tools like Azure Synapse, AWS Redshift, GCP BigQuery, and Databricks for large-scale data processing.
- Mentorship: Guide junior data engineers and BI developers on best practices and advanced techniques.

Your unique ability to communicate and empathize with stakeholders will be invaluable. By understanding the business objectives and success criteria of each project, you will align your data analysis efforts seamlessly with our overarching goals. With your mastery of business valuation, decision-making, project scoping, and storytelling, you will transform data into meaningful narratives that drive real-world impact. At Kyndryl, we believe that data holds immense potential, and we are committed to helping you unlock it. You will have access to vast repositories of data, empowering you to delve deep to determine the root causes of defects and variation. By gaining a comprehensive understanding of the data and its specific purpose, you will be at the forefront of driving innovation and making a difference. If you are ready to unleash your analytical ability, collaborate with industry experts, and shape the future of data-driven decision making, then join us as a Data Architect at Kyndryl. Together, we will harness the power of data to redefine what is possible and create a future filled with limitless possibilities.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important, you have a growth mindset and are keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.

Required Skills and Experience
- Education: Bachelor's or Master's in Computer Science, Data Science, or a related field.
- Experience: 8+ years in data architecture, BI, and analytics roles.
- BI Tools: Power BI, Tableau, Qlik, Looker, SAP Analytics Cloud.
- Data Modeling: ER, dimensional, star, and snowflake schemas.
- Cloud Platforms: Azure, AWS, GCP, Snowflake.
- Databases: SQL Server, Oracle, MySQL, NoSQL (MongoDB, DynamoDB).
- ETL Tools: Informatica, Talend, SSIS, Apache NiFi.
- Scripting: Python, R, SQL, DAX, MDX.
- Soft Skills: Strong communication, problem-solving, and leadership abilities.
- Knowledge of deployment patterns.
- Strong documentation, troubleshooting, and data profiling skills.
- Excellent analytical, conceptual, and problem-solving abilities.
- Ability to manage multiple priorities and swiftly adapt to changing demands.

Preferred Skills and Experience
- Microsoft Certified: Azure Data Engineer Associate
- AWS Certified Data Analytics - Specialty
- Google Professional Data Engineer
- Tableau Desktop Certified Professional
- Power BI Data Analyst Associate

Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate and to build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees, and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone who works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.

Posted 2 weeks ago

Apply

3.0 - 6.0 years

6 - 10 Lacs

Hyderabad, Pune, Bengaluru

Work from Office


About KPI Partners
KPI Partners is a leading provider of data analytics solutions, dedicated to helping organizations transform data into actionable insights. Our innovative approach combines advanced technology with expert consulting, allowing businesses to leverage their data for improved performance and decision-making.

Job Description
We are seeking a skilled and motivated Data Engineer with experience in Databricks to join our dynamic team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and data processing solutions that support our analytics initiatives. You will collaborate closely with data scientists, analysts, and other engineers to ensure the consistent flow of high-quality data across our platforms.

Key Skills: Python, PySpark, Databricks, ETL, Cloud (AWS, Azure, or GCP)

Key Responsibilities
- Develop, construct, test, and maintain data architectures (e.g., large-scale data processing systems) in Databricks.
- Design and implement ETL (Extract, Transform, Load) processes to move and transform data from various sources to target systems.
- Collaborate with data scientists and analysts to understand data requirements and design appropriate data models and structures.
- Optimize data storage and retrieval for performance and efficiency.
- Monitor and troubleshoot data pipelines to ensure reliability and performance.
- Engage in data quality assessments, validation, and troubleshooting of data issues.
- Stay current with emerging technologies and best practices in data engineering and analytics.

Qualifications
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
- Proven experience as a Data Engineer or similar role, with hands-on experience in Databricks.
- Strong proficiency in SQL and programming languages such as Python or Scala.
- Experience with cloud platforms (AWS, Azure, or GCP) and related technologies.
- Familiarity with data warehousing concepts and data modeling techniques.
- Knowledge of data integration tools and ETL frameworks.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.

Why Join KPI Partners?
- Be part of a forward-thinking team that values innovation and collaboration.
- Opportunity to work on exciting projects across diverse industries.
- Continuous learning and professional development opportunities.
- Competitive salary and benefits package.
- Flexible work environment with hybrid work options.

If you are passionate about data engineering and excited about using Databricks to drive impactful insights, we would love to hear from you! KPI Partners is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
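
For illustration, here is a minimal sketch of the kind of Databricks ETL pipeline this role describes: extract raw files, transform them in PySpark, and load a Delta table. The paths, columns, and table names are invented placeholders, not details from the posting.

```python
# Minimal PySpark ETL sketch; source path, columns, and target table
# are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV landed in cloud storage (S3/ADLS/GCS path would vary).
raw = (spark.read
       .option("header", True)
       .option("inferSchema", True)
       .csv("/mnt/raw/orders/"))

# Transform: basic cleansing and enrichment.
clean = (raw
         .dropDuplicates(["order_id"])
         .filter(F.col("order_amount") > 0)
         .withColumn("order_date", F.to_date("order_ts"))
         .withColumn("ingested_at", F.current_timestamp()))

# Load: write to a Delta table for downstream analytics.
(clean.write
 .format("delta")
 .mode("overwrite")
 .saveAsTable("analytics.orders_clean"))
```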

Posted 2 weeks ago

Apply

7.0 - 9.0 years

15 - 18 Lacs

Pune

Work from Office


We are looking for a highly skilled Senior Databricks Developer to join our data engineering team. You will be responsible for building scalable and efficient data pipelines using Databricks, Apache Spark, Delta Lake, and cloud-native services (Azure/AWS/GCP). You will work closely with data architects, data scientists, and business stakeholders to deliver high-performance, production-grade solutions.

Key Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines on Databricks using PySpark, Spark SQL, and optionally Scala.
- Work with Databricks components including Workspace, Jobs, DLT (Delta Live Tables), Repos, and Unity Catalog.
- Implement and optimize Delta Lake solutions aligned with Lakehouse and medallion architecture best practices.
- Collaborate with data architects, engineers, and business teams to understand requirements and deliver production-grade solutions.
- Integrate CI/CD pipelines using tools such as Azure DevOps, GitHub Actions, or similar for Databricks deployments.
- Ensure data quality, consistency, governance, and security by using tools like Unity Catalog or Azure Purview.
- Use orchestration tools such as Apache Airflow, Azure Data Factory, or Databricks Workflows to schedule and monitor pipelines.
- Apply strong SQL skills and data warehousing concepts in data modeling and transformation logic.
- Communicate effectively with technical and non-technical stakeholders to translate business requirements into technical solutions.

Required Skills and Qualifications:
- Hands-on experience in data engineering, specifically in Databricks.
- Deep expertise in Databricks Workspace, Jobs, DLT, Repos, and Unity Catalog.
- Strong programming skills in PySpark and Spark SQL; Scala experience is a plus.
- Proficiency with one or more cloud platforms: Azure, AWS, or GCP.
- Experience with Delta Lake, Lakehouse architecture, and medallion architecture patterns.
- Proficiency in building CI/CD pipelines for Databricks using DevOps tools.
- Familiarity with orchestration and ETL/ELT tools such as Airflow, ADF, or Databricks Workflows.
- Strong understanding of data governance, metadata management, and lineage tracking.
- Excellent analytical, communication, and stakeholder management skills.
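
As a rough illustration of the medallion (bronze/silver) pattern mentioned above, here is a hedged PySpark sketch of a bronze-to-silver refinement step; the table names and schema are assumptions, not details from the posting.

```python
# Hypothetical bronze-to-silver step in a medallion architecture.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw events ingested as-is (table and column names are assumed).
bronze = spark.read.table("lakehouse.bronze_events")

# Silver: typed, deduplicated, quality-filtered records.
silver = (bronze
          .dropDuplicates(["event_id"])
          .filter(F.col("event_ts").isNotNull())
          .withColumn("event_date", F.to_date("event_ts")))

(silver.write
 .format("delta")
 .mode("append")  # incremental appends; a MERGE would handle late updates
 .partitionBy("event_date")
 .saveAsTable("lakehouse.silver_events"))
```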

Posted 2 weeks ago

Apply

7.0 - 10.0 years

17 - 27 Lacs

Gurugram

Hybrid


Primary Responsibilities:
- Design and develop applications and services running on Azure, with a strong emphasis on Azure Databricks, ensuring optimal performance, scalability, and security.
- Build and maintain data pipelines using Azure Databricks and other Azure data integration tools.
- Write, read, and debug Spark, Scala, and Python code to process and analyze large datasets.
- Write extensive queries in SQL and Snowflake.
- Implement security and access control measures and regularly audit the Azure platform and infrastructure to ensure compliance.
- Create, understand, and validate the design and estimated effort for a given module/task, and be able to justify it.
- Possess solid troubleshooting skills and troubleshoot issues across different technologies and environments.
- Implement and adhere to best engineering practices like design, unit testing, functional testing automation, continuous integration, and delivery.
- Maintain code quality by writing clean, maintainable, and testable code.
- Monitor performance and optimize resources to ensure cost-effectiveness and high availability.
- Define and document best practices and strategies regarding application deployment and infrastructure maintenance.
- Provide technical support and consultation for infrastructure questions.
- Help develop, manage, and monitor continuous integration and delivery systems.
- Take accountability and ownership of features and teamwork.
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any directives.

Required Qualifications:
- B.Tech/MCA (minimum 16 years of formal education).
- Overall 7+ years of experience, with a minimum of 3 years of experience in Azure (ADF), Databricks, and DevOps.
- 5 years of experience writing advanced-level SQL.
- 2-3 years of experience writing, reading, and debugging Spark, Scala, and Python code.
- 3 or more years of experience architecting, designing, developing, and implementing cloud solutions on Azure.
- Proficiency in programming languages and scripting tools.
- Understanding of cloud data storage and database technologies such as SQL and NoSQL.
- Proven ability to collaborate with multidisciplinary teams of business analysts, developers, data scientists, and subject-matter experts.
- Familiarity with DevOps practices and tools, such as continuous integration and continuous deployment (CI/CD) and Terraform.
- Proven proactive approach to spotting problems, areas for improvement, and performance bottlenecks.
- Proven excellent communication, writing, and presentation skills.
- Experience interacting with international customers to gather requirements and convert them into solutions using relevant skills.

Preferred Qualifications:
- Knowledge of AI/ML or LLMs (GenAI).
- Knowledge of the US Healthcare domain and experience with healthcare data.
- Experience and skills with Snowflake.
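
For context, here is a minimal sketch of querying Snowflake from Python with the snowflake-connector-python package, the kind of SQL-plus-Python work the role lists. Credentials and object names are placeholders, not values from this posting.

```python
# Illustrative only: running a SQL aggregation against Snowflake from Python.
import snowflake.connector

conn = snowflake.connector.connect(
    user="<user>",
    password="<password>",
    account="<account_identifier>",
    warehouse="ANALYTICS_WH",   # hypothetical warehouse
    database="SALES_DB",        # hypothetical database
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute("""
        SELECT order_date, SUM(order_amount) AS daily_revenue
        FROM orders
        GROUP BY order_date
        ORDER BY order_date
    """)
    for order_date, daily_revenue in cur.fetchall():
        print(order_date, daily_revenue)
finally:
    conn.close()
```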

Posted 2 weeks ago

Apply

7.0 - 12.0 years

25 - 30 Lacs

Hyderabad, Bengaluru

Hybrid


Cloud Data Engineer

The Cloud Data Engineer will be responsible for developing the data lake platform and all applications on Azure cloud. Proficiency in data engineering, data modeling, SQL, and Python programming is essential. The Data Engineer will provide design and development solutions for applications in the cloud.

Essential Job Functions:
- Understand requirements and collaborate with the team to design and deliver projects.
- Design and implement data lakehouse projects within Azure.
- Develop the application lifecycle utilizing Microsoft Azure technologies.
- Participate in design, planning, and necessary documentation.
- Engage in Agile ceremonies including daily standups, scrum, retrospectives, demos, and code reviews.
- Hands-on Python/SQL development and Azure data pipelines.
- Collaborate with the team to develop and deliver cross-functional products.

Key Skills:
a. Data Engineering and SQL
b. Python
c. PySpark
d. Azure Data Lake and ADF
e. Databricks
f. CI/CD
g. Strong communication

Other Responsibilities:
- Document and maintain project artifacts.
- Maintain comprehensive knowledge of industry standards, methodologies, processes, and best practices.
- Complete training as required for Privacy, Code of Conduct, etc.
- Promptly report any known or suspected loss, theft, or unauthorized disclosure or use of PI to the General Counsel/Chief Compliance Officer or Chief Information Officer.
- Adhere to the company's compliance program.
- Safeguard the company's intellectual property, information, and assets.
- Other duties as assigned.

Minimum Qualifications and Job Requirements:
- Bachelor's degree in Computer Science.
- 7 years of hands-on experience designing and developing distributed data pipelines.
- 5 years of hands-on experience in Azure data service technologies.
- 5 years of hands-on experience in Python, SQL, object-oriented programming, ETL, and unit testing.
- Experience with data integration via APIs, web services, and queues.
- Experience with Azure DevOps and CI/CD, as well as agile tools and processes including JIRA and Confluence.

*Required: Azure Data Engineer Associate and Databricks data engineering certification.

Posted 2 weeks ago

Apply

3.0 - 6.0 years

3 - 6 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site


Overview: The Full-stack Data Engineer is responsible for delivering a business need end-to-end, from understanding the requirements to deploying the software into production. This role requires you to be fluent in some of the critical technologies, proficient in others, and hungry to learn on the job and add value to the business. Critical attributes of a full-stack engineer, among others, are ownership and accountability. In addition to delivery, the full-stack engineer should have an automation-first and continuous-improvement mindset, drive the adoption of CI/CD tools, and support the improvement of the tool sets and processes. Full-stack engineers are able to articulate clear business objectives aligned to technical specifications and work in an iterative, agile pattern daily. They have ownership over their work tasks and embrace interacting with all levels of the team, raising challenges when necessary. We aim to be cutting-edge engineers - not institutionalized developers.

Roles & Responsibilities:
- Minimize meetings to get requirements and have direct business interactions
- Write referenceable and modular code
- Design and architect the solution independently
- Be fluent in particular areas and proficient in many areas
- Have a passion to learn
- Take ownership and accountability
- Understand when to automate and when not to
- Have a desire to simplify
- Be entrepreneurial / business-minded
- Have a quality mindset: not just code quality, but also ongoing data quality, monitoring data to identify problems before they have a business impact
- Take risks and champion new ideas

Qualifications

Primary Skills:
- Hands-on Python/PySpark programming - 3+ years
- SQL experience - 2+ years (NoSQL can also work, but at least 1 year of SQL is required)
- Experience working with cloud technology - 1+ years - any (AWS preferred)
- DevOps practices

Experience Desired:
- Experience with Git/SVN
- Experience with scripting (JavaScript, Python, R, Ruby, Perl, etc.)
- Experience being part of Agile teams - Scrum or Kanban
- Airflow
- Databricks / cloud certifications

Additional Skills:
- Excellent troubleshooting skills
- Strong communication skills
- Fluent in BDD and TDD development methodologies
- Work in an agile CI/CD environment (Jenkins experience a plus)
- Knowledge and/or experience with healthcare information domains is a plus
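
As a small illustration of the TDD mindset this posting asks for, here is a hedged sketch: a pure transformation function paired with a pytest unit test. The function and field names are invented for the example.

```python
# Hypothetical transform plus its pytest test, written test-first.
def normalize_claim(record: dict) -> dict:
    """Trim and lower-case the identifier; coerce the amount to float."""
    return {
        "claim_id": record["claim_id"].strip().lower(),
        "amount": float(record["amount"]),
    }

def test_normalize_claim():
    raw = {"claim_id": "  CLM-001 ", "amount": "125.50"}
    assert normalize_claim(raw) == {"claim_id": "clm-001", "amount": 125.5}
```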

Posted 2 weeks ago

Apply

10.0 - 20.0 years

45 - 55 Lacs

Noida, Hyderabad, Gurugram

Work from Office


Data Architect - Telecom Domain

To design comprehensive data architectures and technical solutions for telecommunications industry challenges, leveraging TM Forum frameworks and modern data platforms. You will work closely with customers and technology partners to deliver data solutions that address complex telecommunications business requirements, including customer experience management, network optimization, revenue assurance, and digital transformation initiatives.

Responsibilities:
- Design and articulate enterprise-scale telecom data architectures incorporating TM Forum standards and frameworks, including SID (Shared Information/Data Model), TAM (Telecom Application Map), and eTOM (enhanced Telecom Operations Map)
- Develop comprehensive data models aligned with TM Forum guidelines for telecommunications domains such as Customer, Product, Service, Resource, and Partner management
- Create data architectures that support telecom-specific use cases including customer journey analytics, network performance optimization, fraud detection, and revenue assurance
- Design solutions leveraging Microsoft Azure and Databricks for telecom data processing and analytics
- Conduct technical discovery sessions with telecom clients to understand their OSS/BSS architecture, network analytics needs, customer experience requirements, and digital transformation objectives
- Design and deliver proof of concepts (POCs) and technical demonstrations showcasing modern data platforms solving real-world telecommunications challenges
- Create comprehensive architectural diagrams and implementation roadmaps for telecom data ecosystems spanning cloud, on-premises, and hybrid environments
- Evaluate and recommend appropriate big data technologies, cloud platforms, and processing frameworks based on telecom-specific requirements and regulatory compliance needs
- Design data governance frameworks compliant with telecom industry standards and regulatory requirements (GDPR, data localization, etc.)
- Stay current with the latest advancements in data technologies, including cloud services, data processing frameworks, and AI/ML capabilities
- Contribute to the development of best practices, reference architectures, and reusable solution components to accelerate proposal development

Qualifications:
- Bachelor's or Master's degree in Computer Science, Telecommunications Engineering, Data Science, or a related technical field
- 10+ years of experience in data architecture, data engineering, or solution architecture roles, with at least 5 years in the telecommunications industry
- Deep knowledge of TM Forum frameworks including SID, eTOM, and TAM, and their practical implementation in telecom data architectures
- Demonstrated ability to estimate project efforts, resource requirements, and implementation timelines for complex telecom data initiatives
- Hands-on experience building data models and platforms aligned with TM Forum standards and telecommunications business processes
- Strong understanding of telecom OSS/BSS systems, network management, customer experience management, and revenue management domains
- Hands-on experience with data platforms including Databricks and Microsoft Azure in telecommunications contexts
- Experience with modern data processing frameworks such as Apache Kafka, Spark, and Airflow for real-time telecom data streaming
- Proficiency in the Azure cloud platform and its data services, with an understanding of telecom-specific deployment requirements
- Knowledge of system monitoring and observability tools for telecommunications data infrastructure
- Experience implementing automated testing frameworks for telecom data platforms and pipelines
- Familiarity with telecom data integration patterns, ETL/ELT processes, and data governance practices specific to telecommunications
- Experience designing and implementing data lakes, data warehouses, and machine learning pipelines for telecom use cases
- Proficiency in programming languages commonly used in data processing (Python, Scala, SQL), with telecom domain applications
- Understanding of telecommunications regulatory requirements and data privacy compliance (GDPR, local data protection laws)
- Excellent communication and presentation skills, with the ability to explain complex technical concepts to telecom stakeholders
- Strong problem-solving skills and the ability to think creatively to address telecommunications industry challenges
- TM Forum certifications or telecommunications industry certifications are good to have
- Relevant data platform certifications such as Databricks or Azure Data Engineer are a plus

If you meet all or most of the criteria, contact bdm@intellisearchonline.net, M: 9341626895.
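
To make the real-time streaming requirement concrete, here is a hedged PySpark Structured Streaming sketch that consumes telecom network events from Kafka and computes a windowed latency aggregate. The topic, schema, broker address, and paths are assumptions, not details from the posting.

```python
# Hypothetical Kafka -> Spark Structured Streaming -> Delta pipeline.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("network_events_stream").getOrCreate()

schema = StructType([
    StructField("cell_id", StringType()),
    StructField("latency_ms", DoubleType()),
    StructField("event_ts", TimestampType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "network-events")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# 5-minute average latency per cell, for network performance monitoring.
agg = (events
       .withWatermark("event_ts", "10 minutes")
       .groupBy(F.window("event_ts", "5 minutes"), "cell_id")
       .agg(F.avg("latency_ms").alias("avg_latency_ms")))

# Delta sink assumes a Databricks-style environment with delta available.
query = (agg.writeStream
         .outputMode("append")
         .format("delta")
         .option("checkpointLocation", "/chk/network_latency")
         .start("/delta/network_latency"))
```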

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 - 0 Lacs

Hyderabad

Work from Office


Experience Required: 3+ years

Technical knowledge: AWS, Python, SQL, S3, EC2, Glue, Athena, Lambda, DynamoDB, Redshift, Step Functions, CloudFormation, CI/CD pipelines, GitHub, EMR, RDS, AWS Lake Formation, GitLab, Jenkins, and AWS CodePipeline.

Role Summary: As a Senior Data Engineer with over 3 years of expertise in Python, PySpark, and SQL, you will design, develop, and optimize complex data pipelines, support data modeling, and contribute to the architecture that supports big data processing and analytics in cutting-edge cloud solutions that drive business growth. You will lead the design and implementation of scalable, high-performance data solutions on AWS and mentor junior team members. This role demands a deep understanding of AWS services, big data tools, and complex architectures to support large-scale data processing and advanced analytics.

Key Responsibilities:
- Design and develop robust, scalable data pipelines using AWS services, Python, PySpark, and SQL that integrate seamlessly with the broader data and product ecosystem.
- Lead the migration of legacy data warehouses and data marts to AWS cloud-based data lake and data warehouse solutions.
- Optimize data processing and storage for performance and cost.
- Implement data security and compliance best practices, in collaboration with the IT security team.
- Build flexible and scalable systems to handle the growing demands of real-time analytics and big data processing.
- Work closely with data scientists and analysts to support their data needs and assist in building complex queries and data analysis pipelines.
- Collaborate with cross-functional teams to understand their data needs and translate them into technical requirements.
- Continuously evaluate new technologies and AWS services to enhance data capabilities and performance.
- Create and maintain comprehensive documentation of data pipelines, architectures, and workflows.
- Participate in code reviews and ensure that all solutions align with pre-defined architectural specifications.
- Present findings to executive leadership and recommend data-driven strategies for business growth.
- Communicate effectively with different levels of management to gather use cases/requirements and provide designs that cater to those stakeholders.
- Handle clients in multiple industries at the same time, balancing their unique needs.
- Provide mentoring and guidance to junior data engineers and team members.

Requirements:
- 3+ years of experience in a data engineering role, with a strong focus on AWS, Python, PySpark, Hive, and SQL.
- Proven experience designing and delivering large-scale data warehousing and data processing solutions.
- Ability to lead the design and implementation of complex, scalable data pipelines using AWS services such as S3, EC2, EMR, RDS, Redshift, Glue, Lambda, Athena, and AWS Lake Formation.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- Deep knowledge of big data technologies and ETL tools, such as Apache Spark, PySpark, Hadoop, Kafka, and Spark Streaming.
- Experience implementing data architecture patterns, including event-driven pipelines, Lambda architectures, and data lakes.
- Experience with modern tools like Databricks, Airflow, and Terraform for orchestration and infrastructure as code.
- Experience implementing CI/CD using GitLab, Jenkins, and AWS CodePipeline.
- Ability to ensure data security, governance, and compliance by leveraging tools such as IAM, KMS, and AWS CloudTrail.
- Experience mentoring junior engineers and fostering a culture of continuous learning and improvement.
- Excellent problem-solving and analytical skills, with a strategic mindset.
- Strong communication and leadership skills, with the ability to influence stakeholders at all levels.
- Ability to work independently as well as part of a team in a fast-paced environment.
- Advanced data visualization skills and the ability to present complex data in a clear and concise manner.

Preferred Skills:
- Experience with Databricks, Snowflake, and machine learning pipelines.
- Exposure to real-time data streaming technologies and architectures.
- Familiarity with containerization and serverless computing (Docker, Kubernetes, AWS Lambda).
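
As one concrete illustration of the pipeline work described, here is a minimal PySpark sketch that curates raw JSON from S3 into partitioned Parquet suitable for querying via Athena or Glue. Bucket names and columns are placeholders, not details from the posting.

```python
# Hypothetical S3 curation job: raw JSON in, partitioned Parquet out.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3_curation").getOrCreate()

events = spark.read.json("s3://example-raw-bucket/events/")

curated = (events
           .filter(F.col("event_type").isNotNull())
           .withColumn("event_date", F.to_date("event_ts")))

# Partitioning by date keeps Athena scans cheap for date-bounded queries.
(curated.write
 .mode("overwrite")
 .partitionBy("event_date")
 .parquet("s3://example-curated-bucket/events/"))
```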

Posted 2 weeks ago

Apply

3.0 - 6.0 years

10 - 18 Lacs

Chennai

Hybrid


Role & Responsibilities

We are seeking a talented and detail-oriented Power BI Developer with strong skills in SQL and experience working with Azure Databricks. The ideal candidate will be responsible for transforming raw data into meaningful insights using business intelligence tools and data engineering practices. This role involves building dashboards, writing optimized queries, and working with large-scale data platforms to support business decision-making.

Key Responsibilities:
- Design, develop, and maintain interactive Power BI dashboards and reports that provide actionable business insights
- Collaborate with stakeholders to gather requirements and translate them into scalable BI solutions
- Write and optimize complex SQL queries to extract, transform, and load data from various sources
- Work with Databricks for data processing, transformation, and integration
- Create and manage data models, measures, and DAX formulas within Power BI
- Implement data refresh schedules, manage access controls, and maintain report performance
- Ensure data accuracy and integrity across all reporting layers
- Participate in code reviews, testing, and deployment of BI solutions
- Document technical specifications, data flows, and report logic

Posted 2 weeks ago

Apply

4.0 - 8.0 years

12 - 18 Lacs

Hyderabad, Chennai, Coimbatore

Hybrid


We are seeking a skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have experience in designing, developing, and maintaining scalable data pipelines and architectures using Hadoop, PySpark, ETL processes, and cloud technologies.

Responsibilities:
- Design, develop, and maintain data pipelines for processing large-scale datasets.
- Build efficient ETL workflows to transform and integrate data from multiple sources.
- Develop and optimize Hadoop and PySpark applications for data processing.
- Ensure data quality, governance, and security standards are met across systems.
- Implement and manage cloud-based data solutions (AWS, Azure, or GCP).
- Collaborate with data scientists and analysts to support business intelligence initiatives.
- Troubleshoot performance issues and optimize query execution in big data environments.
- Stay updated with industry trends and advancements in big data and cloud technologies.

Required Skills:
- Strong programming skills in Python, Scala, or Java.
- Hands-on experience with the Hadoop ecosystem (HDFS, Hive, Spark, etc.).
- Expertise in PySpark for distributed data processing.
- Proficiency in ETL tools and workflows (SSIS, Apache NiFi, or custom pipelines).
- Experience with cloud platforms (AWS, Azure, GCP) and their data-related services.
- Knowledge of SQL and NoSQL databases.
- Familiarity with data warehousing concepts and data modeling techniques.
- Strong analytical and problem-solving skills.

Interested candidates can reach us at +91 7305206696 / saranyadevib@talentien.com

Posted 2 weeks ago

Apply

2.0 - 3.0 years

6 - 7 Lacs

Pune

Work from Office


Data Engineer Job Description

Jash Data Sciences: Letting Data Speak! Do you love solving real-world data problems with the latest and best techniques? And having fun while solving them in a team? Then come and join our high-energy team of passionate data people. Jash Data Sciences is the right place for you. We are a cutting-edge Data Sciences and Data Engineering startup based in Pune, India. We believe in continuous learning and evolving together. And we let the data speak!

What will you be doing?
- Discover trends in data sets and develop algorithms to transform raw data for further analytics.
- Create data pipelines to bring in data from various sources, in different formats, transform it, and finally load it to the target database.
- Implement ETL/ELT processes in the cloud using tools like Airflow, Glue, Stitch, Cloud Data Fusion, and Dataflow.
- Design and implement data lakes, data warehouses, and data marts in AWS, GCP, or Azure using Redshift, BigQuery, PostgreSQL, etc.
- Create efficient SQL queries and understand query execution plans for tuning queries on engines like PostgreSQL.
- Performance-tune OLAP/OLTP databases by creating indices, tables, and views.
- Write Python scripts for the orchestration of data pipelines.
- Have thoughtful discussions with customers to understand their data engineering requirements, and break complex requirements into smaller tasks for execution.

What do we need from you?
- Strong Python coding skills with basic knowledge of algorithms/data structures and their application.
- Strong understanding of data engineering concepts, including ETL, ELT, data lakes, data warehousing, and data pipelines.
- Experience designing and implementing data lakes, data warehouses, and data marts that support terabytes of data.
- A track record of implementing data pipelines on public cloud environments (AWS/GCP/Azure) is highly desirable.
- A clear understanding of database concepts like indexing, query performance optimization, views, and various types of schemas.
- Hands-on SQL programming experience with knowledge of window functions, subqueries, and various types of joins.
- Experience working with big data technologies like PySpark/Hadoop.
- A good team player with the ability to communicate with clarity.
- Show us your git repo/blog!

Qualifications
- 1-2 years of experience working on data engineering projects for Data Engineer I.
- 2-5 years of experience working on data engineering projects for Data Engineer II.
- 1-5 years of hands-on Python programming experience.
- A Bachelor's/Master's degree in Computer Science is good to have.
- Courses or certifications in the area of data engineering will be given higher preference.
- Candidates who have demonstrated a drive for learning and keeping up to date with technology through ongoing courses and self-learning will be given high preference.
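
For illustration, here is a minimal Airflow DAG sketch of the ETL orchestration described above, using the Airflow 2.x API; the task logic and names are placeholders, not details from the posting.

```python
# Hypothetical extract -> transform -> load DAG.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source APIs / databases")

def transform():
    print("clean and reshape the extracted data")

def load():
    print("load the result into the warehouse (e.g., Redshift/BigQuery)")

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```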

Posted 2 weeks ago

Apply

10.0 - 18.0 years

30 - 45 Lacs

Hyderabad

Remote


About Syren Cloud
Syren Cloud Technologies is a cutting-edge company specializing in supply chain solutions and data engineering. Its intelligent insights, powered by technologies like AI and NLP, empower organizations with real-time visibility and proactive decision-making. From control towers to agile inventory management, Syren unlocks unparalleled success in supply chain management.

Role Summary
An Azure Data Architect is responsible for designing, implementing, and maintaining the data infrastructure within an organization. They collaborate with both business and IT teams to understand stakeholders' needs and unlock the full potential of data. They create conceptual and logical data models, analyze structural requirements, and ensure efficient database solutions.

Job Responsibilities
- Act as a subject matter expert providing best-practice guidance on data lake and ETL architecture frameworks suitable for handling big data, both unstructured and structured
- Drive business and service layer development with the customer by finding new opportunities, expanding existing solutions, and creating new ones
- Provide hands-on subject matter expertise to build and implement Azure-based big data solutions
- Research, evaluate, architect, and deploy new tools, frameworks, and patterns to build sustainable big data platforms for our clients
- Facilitate and/or conduct requirements workshops
- Collaborate on the prioritization of technical requirements
- Collaborate with peer teams and vendors on the solution and delivery
- Take overall accountability for project delivery
- Work collaboratively with Product Management, Data Management, and other architects to deliver the cloud data platform and Data as a Service
- Consult with clients to assess current problem states, define desired future states, define solution architecture, and make solution recommendations

Posted 2 weeks ago

Apply

4.0 - 7.0 years

13 - 17 Lacs

Pune

Work from Office


Overview
The Data Technology team at MSCI is responsible for meeting the data requirements across various business areas, including Index, Analytics, and Sustainability. Our team collates data from multiple sources such as vendors (e.g., Bloomberg, Reuters), website acquisitions, and web scraping (e.g., financial news sites, company websites, exchange websites, filings). This data can be in structured or semi-structured formats. We normalize the data, perform quality checks, assign internal identifiers, and release it to downstream applications.

Responsibilities
As data engineers, we build scalable systems to process data in various formats and volumes, ranging from megabytes to terabytes. Our systems perform quality checks, match data across various sources, and release it in multiple formats. We leverage the latest technologies, sources, and tools to process the data. Some of the exciting technologies we work with include Snowflake, Databricks, and Apache Spark.

Qualifications
- Core Java, Spring Boot, Apache Spark, Spring Batch, Python.
- Exposure to SQL databases like Oracle, MySQL, and Microsoft SQL Server is a must.
- Experience, knowledge, or certification in cloud technology, preferably Microsoft Azure or Google Cloud Platform, is good to have.
- Exposure to NoSQL databases like Neo4j or document databases is also good to have.

What we offer you
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
- Flexible working arrangements, advanced technology, and collaborative workspaces.
- A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
- A global network of talented colleagues, who inspire, support, and share their expertise to innovate and deliver for our clients.
- A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development.
- Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles.
- An environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose – to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry.

MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer.
It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries. To all recruitment agencies MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes. Note on recruitment scams We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com

Posted 2 weeks ago

Apply

2.0 - 4.0 years

8 - 12 Lacs

Mumbai

Work from Office


The SAS to Databricks Migration Developer will be responsible for migrating existing SAS code, data processes, and workflows to the Databricks platform. This role requires expertise in both SAS and Databricks, with a focus on converting SAS logic into scalable PySpark and Python code. The developer will design, implement, and optimize data pipelines, ensuring seamless integration and functionality within the Databricks environment. Collaboration with various teams is essential to understand data requirements and deliver solutions that meet business needs.
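
To make the conversion work concrete, here is a hedged example of how a simple SAS aggregation might be re-expressed in PySpark; the dataset and column names are invented, and real migrations involve far more nuance than this sketch.

```python
# Hypothetical SAS PROC MEANS-style aggregation rewritten in PySpark.
#
#   /* SAS */
#   proc means data=claims sum;
#     class region;
#     var paid_amount;
#   run;
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

claims = spark.read.table("staging.claims")  # assumed source table

# Equivalent PySpark: group by the CLASS variable, sum the VAR column.
summary = (claims
           .groupBy("region")
           .agg(F.sum("paid_amount").alias("paid_amount_sum")))

summary.show()
```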

Posted 2 weeks ago

Apply

3.0 - 6.0 years

0 - 3 Lacs

Navi Mumbai, Mumbai (All Areas)

Work from Office


Job Title: Data Scientist
Location: Navi Mumbai
Duration: Full-time
Positions: Multiple

We are looking for highly skilled Data Scientists with deep expertise in time series forecasting, particularly in demand forecasting and customer lifecycle analytics (CLV). The ideal candidate will be proficient in Python or PySpark, have hands-on experience with tools like Prophet and ARIMA, and be comfortable working in Databricks environments. Familiarity with classic ML models and optimization techniques is a plus.

Key Responsibilities
- Develop, deploy, and maintain time series forecasting models (Prophet, ARIMA, etc.) for demand forecasting and customer behavior modeling.
- Design and implement Customer Lifetime Value (CLV) models to drive customer retention and engagement strategies.
- Process and analyze large datasets using PySpark or Python (Pandas).
- Partner with cross-functional teams to identify business needs and translate them into data science solutions.
- Leverage classic ML techniques (classification, regression) and boosting algorithms (e.g., XGBoost, LightGBM) to support broader analytics use cases.
- Use Databricks for collaborative development, data pipelines, and model orchestration.
- Apply optimization techniques where relevant to improve forecast accuracy and business decision-making.
- Present actionable insights and communicate model results effectively to technical and non-technical stakeholders.

Required Qualifications
- Strong experience in time series forecasting, with hands-on knowledge of Prophet, ARIMA, or equivalent - Mandatory.
- Proven track record in demand forecasting - Highly Preferred.
- Experience in modeling Customer Lifetime Value (CLV) or similar customer analytics use cases - Highly Preferred.
- Proficiency in Python (Pandas) or PySpark - Mandatory.
- Experience with Databricks - Mandatory.
- Solid foundation in statistics, predictive modeling, and machine learning.
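
For illustration, here is a minimal Prophet demand-forecasting sketch of the kind described (the package is named prophet); the data source is hypothetical, and Prophet requires the input columns ds (date) and y (value).

```python
# Hypothetical daily-demand forecast with Prophet.
import pandas as pd
from prophet import Prophet

# Historical daily demand with columns ds (date) and y (units sold);
# the CSV file name is a placeholder.
history = pd.read_csv("daily_demand.csv", parse_dates=["ds"])

model = Prophet(weekly_seasonality=True, yearly_seasonality=True)
model.fit(history)

# Forecast the next 90 days with uncertainty intervals.
future = model.make_future_dataframe(periods=90)
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```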

Posted 2 weeks ago

Apply

6.0 - 9.0 years

30 - 45 Lacs

Bengaluru

Work from Office


Responsibilities:
- Contribute to and build an internal product library focused on solving business problems related to prediction and recommendation.
- Research unfamiliar methodologies and techniques to fine-tune existing models in the product suite, and recommend better solutions and/or technologies.
- Improve product features to include newer machine learning algorithms for use cases such as product recommendation, real-time predictions, fraud detection, and offer personalization.
- Collaborate with client teams to onboard data, build models, and score predictions.
- Participate in building automations and standalone applications around machine learning algorithms to enable a one-click solution for getting predictions and recommendations.
- Analyze large datasets, perform data wrangling operations, apply statistical treatments to filter and fine-tune input data, engineer new features, and aid the process of building machine learning models.
- Run test cases to tune existing models for performance, check criteria, and define thresholds for success by scaling the input data to multifold.
- Demonstrate an understanding of machine learning concepts such as regression, classification, matrix factorization, and k-fold validation, and of algorithms such as decision trees, random forests, and k-means clustering.
- Demonstrate working knowledge of, and contribute to, building models using deep learning techniques, ensuring robust, scalable, and high-performance solutions.

Minimum Qualifications:
- Education: Master's or PhD in a quantitative discipline (Statistics, Economics, Mathematics, Computer Science) is highly preferred.
- Deep Learning Mastery: Extensive experience with deep learning frameworks (TensorFlow, PyTorch, or Keras) and advanced deep learning projects across various domains, with a focus on multimodal data applications.
- Generative AI Expertise: Proven experience with generative AI models and techniques, such as RAG, VAEs, and Transformers, and their application at scale in content creation or data augmentation.
- Programming and Big Data: Expert-level proficiency in Python and big data/cloud technologies (Databricks and Spark), with a minimum of 4-5 years of experience.
- Recommender Systems and Real-time Predictions: Expertise in developing sophisticated recommender systems, including the application of real-time prediction frameworks.
- Machine Learning Algorithms: In-depth experience with algorithms such as logistic regression, random forest, XGBoost, KNN, SVM, neural networks, linear regression, lasso regression, k-means, and ensemble methods.

Desirable Qualifications:
- Generative AI Tools Knowledge: Proficiency with tools and platforms for generative AI (such as OpenAI, Hugging Face Transformers).
- Databricks and Unity Catalog: Experience leveraging Databricks and Unity Catalog for robust data management, model deployment, and tracking.
- Working experience with CI/CD tools such as Git and Bitbucket.
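
As a small, hedged illustration of the k-fold validation and tree-ensemble concepts listed above, here is a scikit-learn sketch on synthetic data; the dataset and parameters are arbitrary examples.

```python
# K-fold cross-validation of a random forest on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)

# 5-fold cross-validated accuracy.
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"mean accuracy: {scores.mean():.3f} (+/- {scores.std():.3f})")
```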

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 18 Lacs

Chennai

Work from Office


Technical Project Manager

We are looking for an experienced Technical Project Manager with 5 to 10 years of proven success in overseeing complex project portfolios. The ideal candidate should have a strong foundation in both technical execution and project leadership, with the ability to thrive in a fast-paced, evolving environment.

Position: Technical Project Manager (immediate joiners preferred)
Experience: 5-10 years
No. of positions: 1
Location: Chennai, India

Key Skills
- Strong project and customer management skills
- Strong foundational knowledge of cloud platforms
- Prototyping mindset with a focus on design thinking
- Ability to handle projects spanning Web, API, Business Intelligence, Data & Analytics, AI/ML, ETL & ETL tools, BI tools, MSSQL, Big Data, Cloud SQL, Snowflake, Databricks, and Python
- PMP certification is a plus

Responsibilities
- End-to-end project and delivery management across the full lifecycle
- Collaborate closely with engineering teams to understand technical issues, contribute to solution design, and ensure effective implementation
- Expertise in Agile-based custom application development
- Leadership of cross-functional teams and effective resource planning
- Strong client and stakeholder engagement, including change management
- Proficient in risk identification, mitigation, and governance
- Skilled in tracking project schedules, resources, and costs
- Experienced in coordinating with Cloud/Infrastructure teams for deployments and change requests
- Develop and maintain comprehensive project documentation, including project plans, timelines, budgets, and risk assessments
- Oversee technical resource management, including workload validation, expertise allocation, and onboarding
- Manage all technical activities outlined in the customer contract, ensuring quality, mitigating risks, and adhering to timelines
- Good experience in effectively managing backlogs using JIRA or other tools

Posted 2 weeks ago

Apply

4.0 - 9.0 years

8 - 12 Lacs

Pune

Work from Office


Amdocs helps those who build the future to make it amazing. With our market-leading portfolio of software products and services, we unlock our customers' innovative potential, empowering them to provide next-generation communication and media experiences for both individual end users and enterprise customers. Our employees around the globe are here to accelerate service providers' migration to the cloud, enable them to differentiate in the 5G era, and digitalize and automate their operations. Listed on the NASDAQ Global Select Market, Amdocs had revenue of $5.00 billion in fiscal 2024. For more information, visit www.amdocs.com.

In one sentence
We are seeking a Data Engineer with advanced expertise in Databricks SQL, PySpark, Spark SQL, and workflow orchestration using Airflow. The successful candidate will lead critical projects, including migrating SQL Server stored procedures to Databricks notebooks, designing incremental data pipelines, and orchestrating workflows in Azure Databricks.

What will your job look like?
- Migrate SQL Server stored procedures to Databricks notebooks, leveraging PySpark and Spark SQL for complex transformations.
- Design, build, and maintain incremental data load pipelines to handle dynamic updates from various sources, ensuring scalability and efficiency.
- Develop robust data ingestion pipelines to load data into the Databricks Bronze layer from relational databases, APIs, and file systems.
- Implement incremental data transformation workflows to update silver and gold layer datasets in near real-time, adhering to Delta Lake best practices.
- Integrate Airflow with Databricks to orchestrate end-to-end workflows, including dependency management, error handling, and scheduling.
- Understand business and technical requirements, translating them into scalable Databricks solutions.
- Optimize Spark jobs and queries for performance, scalability, and cost-efficiency in a distributed environment.
- Implement robust data quality checks, monitoring solutions, and governance frameworks within Databricks.
- Collaborate with team members on Databricks best practices, reusable solutions, and incremental loading strategies.

All you need is...
- Bachelor's degree in Computer Science, Information Systems, or a related discipline.
- 4+ years of hands-on experience with Databricks, including expertise in Databricks SQL, PySpark, and Spark SQL.
- Proven experience in incremental data loading techniques into Databricks, leveraging Delta Lake's features (e.g., time travel, MERGE INTO).
- Strong understanding of data warehousing concepts, including data partitioning and indexing for efficient querying.
- Proficiency in T-SQL and experience migrating SQL Server stored procedures to Databricks.
- Solid knowledge of Azure cloud services, particularly Azure Databricks and Azure Data Lake Storage.
- Expertise in Airflow integration for workflow orchestration, including designing and managing DAGs.
- Familiarity with version control systems (e.g., Git) and CI/CD pipelines for data engineering workflows.
- Excellent analytical and problem-solving skills with a focus on detail-oriented development.

Preferred Qualifications
- Advanced knowledge of Delta Lake optimizations, such as compaction, Z-ordering, and vacuuming.
- Experience with real-time streaming data pipelines using tools like Kafka or Azure Event Hubs.
- Familiarity with advanced Airflow features, such as SLA monitoring and external task dependencies.
- Certifications such as Databricks Certified Associate Developer for Apache Spark or equivalent.
- Experience in Agile development methodologies.

Why you will love this job:
- You will be able to use your specific insights to lead business change on a large scale and drive transformation within our organization.
- You will be a key member of a global, dynamic, and highly collaborative team with various possibilities for personal and professional development.
- You will have the opportunity to work in a multinational environment for the global market leader in its field!
- We offer a wide range of stellar benefits including health, dental, vision, and life insurance, as well as paid time off, sick time, and parental leave!
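
For context, here is a hedged sketch of the incremental-load pattern the posting highlights: upserting a batch of changed rows into a Delta table with MERGE via the delta-spark API. Table names and join keys are placeholders, not details from the posting.

```python
# Hypothetical incremental upsert into a Delta silver table.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Batch of new/changed rows (assumed staging table).
updates = spark.read.table("staging.customer_updates")

target = DeltaTable.forName(spark, "silver.customers")

# MERGE: update matched rows, insert new ones.
(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```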

Posted 2 weeks ago

Apply

10.0 - 15.0 years

30 - 35 Lacs

Hyderabad

Work from Office


Responsibilities:
- Lead and manage an offshore team of data engineers, providing strategic guidance, mentorship, and support to ensure the successful delivery of projects and the development of team members.
- Collaborate closely with onshore stakeholders to understand project requirements, allocate resources efficiently, and ensure alignment with client expectations and project timelines.
- Drive the technical design, implementation, and optimization of data pipelines, ETL processes, and data warehouses, ensuring scalability, performance, and reliability.
- Define and enforce engineering best practices, coding standards, and data quality standards to maintain high-quality deliverables and mitigate project risks.
- Stay abreast of emerging technologies and industry trends in data engineering, and provide recommendations for tooling, process improvements, and skill development.
- Assume a data architect role as needed, leading the design and implementation of data architecture solutions, data modeling, and optimization strategies.
- Demonstrate proficiency in AWS services, including:
  - Expertise in cloud data services such as Amazon Redshift, Amazon EMR, and AWS Glue to design and implement scalable data solutions.
  - Experience with cloud infrastructure services such as AWS EC2 and AWS S3 to optimize data processing and storage.
  - Knowledge of cloud security best practices, IAM roles, and encryption mechanisms to ensure data privacy and compliance.
  - Proficiency in managing or implementing cloud data warehouse solutions, including data modeling, schema design, performance tuning, and optimization techniques.
- Demonstrate proficiency in modern data platforms such as Snowflake and Databricks, including:
  - Deep understanding of Snowflake's architecture, capabilities, and best practices for designing and implementing data warehouse solutions.
  - Hands-on experience with Databricks for data engineering, data processing, and machine learning tasks, leveraging Spark clusters for scalable data processing.
  - Ability to optimize Snowflake and Databricks configurations for performance, scalability, and cost-effectiveness.
- Manage the offshore team's performance, including resource allocation, performance evaluations, and professional development, to maximize team productivity and morale.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field; advanced degree preferred.
- 10+ years of experience in data engineering, with a proven track record of leadership and technical expertise in managing complex data projects.
- Proficiency in programming languages such as Python, Java, or Scala, as well as expertise in SQL and relational databases (e.g., PostgreSQL, MySQL).
- Strong understanding of distributed computing, cloud technologies (e.g., AWS), and big data frameworks (e.g., Hadoop, Spark).
- Experience with data architecture design, data modeling, and optimization techniques.
- Excellent communication, collaboration, and leadership skills, with the ability to effectively manage remote teams and engage with onshore stakeholders.
- Proven ability to adapt to evolving project requirements and effectively prioritize tasks in a fast-paced environment.

Posted 2 weeks ago

Apply

3.0 - 6.0 years

9 - 18 Lacs

Mumbai, Navi Mumbai

Work from Office


About Us: Celebal Technologies is a leading solutions and services company in the fields of Data Science, Big Data, Enterprise Cloud & Automation. We are at the forefront of leveraging cutting-edge technologies to drive innovation and enhance our business processes. As part of our commitment to staying ahead in the industry, we are seeking a talented and experienced Data & AI Engineer with strong Azure cloud competencies to join our dynamic team.

Job Summary: We are looking for a highly skilled Data Scientist with deep expertise in time series forecasting, particularly in demand forecasting and customer lifecycle analytics (CLV). The ideal candidate will be proficient in Python or PySpark, have hands-on experience with tools like Prophet and ARIMA, and be comfortable working in Databricks environments. Familiarity with classic ML models and optimization techniques is a plus.

Key Responsibilities:
• Develop, deploy, and maintain time series forecasting models (Prophet, ARIMA, etc.) for demand forecasting and customer behavior modeling.
• Design and implement Customer Lifetime Value (CLV) models to drive customer retention and engagement strategies.
• Process and analyze large datasets using PySpark or Python (Pandas).
• Partner with cross-functional teams to identify business needs and translate them into data science solutions.
• Leverage classic ML techniques (classification, regression) and boosting algorithms (e.g., XGBoost, LightGBM) to support broader analytics use cases.
• Use Databricks for collaborative development, data pipelines, and model orchestration.
• Apply optimization techniques where relevant to improve forecast accuracy and business decision-making.
• Present actionable insights and communicate model results effectively to technical and non-technical stakeholders.

Required Qualifications:
• Strong experience in time series forecasting, with hands-on knowledge of Prophet, ARIMA, or equivalent (Mandatory).
• Proven track record in demand forecasting (Highly Preferred).
• Experience in modeling Customer Lifetime Value (CLV) or similar customer analytics use cases (Highly Preferred).
• Proficiency in Python (Pandas) or PySpark (Mandatory).
• Experience with Databricks (Mandatory).
• Solid foundation in statistics, predictive modeling, and machine learning.
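For readers unfamiliar with the forecasting stack this posting names, a minimal Prophet demand-forecasting sketch might look like the following. The input file and its column names are hypothetical placeholders.

```python
# Minimal demand-forecasting sketch with Prophet.
# "daily_sales.csv" and its columns (date, units_sold) are hypothetical.
import pandas as pd
from prophet import Prophet

df = pd.read_csv("daily_sales.csv")
df = df.rename(columns={"date": "ds", "units_sold": "y"})  # Prophet expects ds/y

model = Prophet(weekly_seasonality=True, yearly_seasonality=True)
model.fit(df)

future = model.make_future_dataframe(periods=90)  # forecast 90 days ahead
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```

The same workflow runs unchanged in a Databricks notebook, which is presumably why the posting pairs Prophet with Databricks experience.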

Posted 2 weeks ago

Apply

6.0 - 8.0 years

10 - 12 Lacs

Hyderabad

Work from Office


Job Description: We are looking for a highly experienced and dynamic Senior Data Manager / Lead to oversee a team of Data Engineers and Data Scientists. This role demands a strong background in data platforms such as Snowflake and proficiency in Python, combined with excellent people management and project leadership skills. While hands-on experience with the technologies is beneficial, the primary focus of this role is on team leadership, strategic planning, and project delivery.

Job Title: Senior Data Manager / Lead
Location: Hyderabad (Work From Office)
Shift Timing: 10 AM - 7 PM

Key Responsibilities:
- Lead, mentor, and manage a team of Data Engineers and Data Scientists.
- Oversee the design and implementation of data pipelines and analytics solutions using Snowflake and Python.
- Collaborate with cross-functional teams (product, business, engineering) to align data solutions with business goals.
- Ensure timely delivery of projects with high quality and performance.
- Conduct performance reviews, create training plans, and support career development for the team.
- Set priorities, allocate resources, and manage workloads within the data team.
- Drive adoption of best practices in data management, governance, and documentation.
- Evaluate new tools and technologies relevant to data engineering and data science.

Required Skills & Qualifications:
- 6+ years of experience in data-related roles, with at least 2-3 years in a leadership or management position.
- Strong understanding of Snowflake architecture, performance tuning, data sharing, security, etc.
- Solid knowledge of Python for data engineering or data science tasks.
- Experience in leading data migration, ETL/ELT, and analytics projects.
- Ability to translate business requirements into technical solutions.
- Excellent leadership, communication, and stakeholder management skills.
- Exposure to tools like Databricks, Dataiku, Airflow, or similar platforms is a plus.
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field.
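As one simplified illustration of the Snowflake-plus-Python combination this role centers on, a sketch using the snowflake-connector-python package might look like this. The connection parameters and the table/column names in the query are placeholders.

```python
# Minimal Snowflake query sketch using snowflake-connector-python.
# Connection parameters and table/column names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="SALES",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute("SELECT region, SUM(amount) FROM orders GROUP BY region")
    for region, total in cur:      # the cursor yields result rows as tuples
        print(region, total)
finally:
    conn.close()
```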

Posted 2 weeks ago

Apply

12.0 - 17.0 years

20 - 25 Lacs

Noida

Work from Office


Position Summary: Overall 12+ years of quality engineering experience with DWH/ETL for enterprise-grade applications. Hands-on experience with functional, non-functional, and automation testing of products, and with leveraging LLMs/GenAI to improve the efficiency and effectiveness of the overall delivery process.

Job Responsibilities:
- Lead end-to-end QE for the product suite.
- Author the QE test strategy for a release and execute it.
- Drive quality releases by working closely with development, PMs, DevOps, support, and business teams.
- Achieve automation coverage for the product suite with good line coverage (see the validation sketch after this listing).
- Manage risks and resolve issues that affect release scope, schedule, and quality.
- Work with product teams to understand the impact of branches, code merges, etc.
- Lead and coordinate release activities, including overall execution.
- Lead a team of SDETs and help them address their issues.
- Mentor and coach members of the team.

Education: BE/B.Tech or Master of Computer Applications.

Work Experience: Overall 12+ years of strong hands-on experience with DWH/ETL for enterprise-grade applications.

Behavioural Competencies: Teamwork & Leadership, Motivation to Learn and Grow, Ownership, Cultural Fit, Talent Management.

Technical Competencies: Life Sciences knowledge, AWS Data Pipeline, Azure Data Factory, Data Governance, Data Modelling, Data Privacy, Data Security, Data Validation, Testing Tools, Data Visualisation, Databricks, Snowflake, Amazon Redshift, MS SQL Server, Performance Testing.
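To make the DWH/ETL automation expectation concrete, a minimal PyTest-style data-validation sketch for a warehouse load could look like the following. The file paths, key columns, and rules are hypothetical examples of the kind of checks such a suite would automate.

```python
# Minimal PyTest data-validation sketch for a DWH load.
# File paths and column names are hypothetical.
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    # One Spark session shared across the test session.
    return SparkSession.builder.appName("dwh-validation").getOrCreate()

def test_primary_key_is_unique(spark):
    df = spark.read.parquet("/data/curated/customers/")
    assert df.count() == df.dropDuplicates(["customer_id"]).count()

def test_amounts_are_non_negative(spark):
    df = spark.read.parquet("/data/curated/orders/")
    assert df.filter("amount < 0").count() == 0
```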

Posted 2 weeks ago

Apply

8.0 - 10.0 years

10 - 20 Lacs

Kolkata, Hyderabad, Pune

Work from Office


Must have: Azure Data Factory (Mandatory), Azure Databricks, PySpark, Python, advanced SQL, and the Azure ecosystem. 1) Advanced SQL skills 2) Data analysis 3) Data models 4) Python (desired) 5) Automation. Experience required: 8 to 10 years.
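As one reading of "advanced SQL" in a Databricks/PySpark context, a window-function sketch might look like this; it assumes a registered `orders` table, and all table and column names are hypothetical.

```python
# Advanced-SQL sketch on Databricks: latest order per customer via a window.
# Assumes a registered "orders" table; names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-demo").getOrCreate()

latest = spark.sql("""
    SELECT customer_id, order_id, order_ts
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id ORDER BY order_ts DESC
               ) AS rn
        FROM orders
    )
    WHERE rn = 1
""")
latest.show()
```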

Posted 2 weeks ago

Apply

7.0 - 12.0 years

18 - 30 Lacs

Bengaluru

Work from Office


Urgently hiring for a Senior Azure Data Engineer.
Job Location: Bangalore
Minimum experience: 7+ years total, with a minimum of 4 years of relevant experience.
Keywords: Databricks, PySpark, Scala, SQL, live/streaming data, batch processing data.

Roles and Responsibilities: The Data Engineer will work on data engineering projects for various business units, focusing on the delivery of complex data management solutions by leveraging industry best practices. They work with the project team to build the most efficient data pipelines and data management solutions that make data easily available for consuming applications and analytical solutions. A data engineer is expected to possess strong technical skills.

Key Characteristics:
- Technology champion who constantly pursues skill enhancement and has an inherent curiosity to understand work from multiple dimensions.
- Interest and passion in Big Data technologies, and appreciation of the value an effective data management solution can bring.
- Has worked on real data challenges and handled high volume, velocity, and variety of data.
- Excellent analytical and problem-solving skills, with the willingness to take ownership and resolve technical challenges.
- Contributes to community-building initiatives like CoE, CoP.

Mandatory skills:
- Azure - Master
- ELT - Skill
- Data Modeling - Skill
- Data Integration & Ingestion - Skill
- Data Manipulation and Processing - Skill
- GitHub, GitHub Actions, Azure DevOps - Skill
- Data Factory, Databricks, SQL DB, Synapse, Stream Analytics, Glue, Airflow, Kinesis, Redshift, SonarQube, PyTest - Skill

Optional skills:
- Experience in project management, running a scrum team.
- Experience working with BPC, Planning.
- Exposure to working with an external technical ecosystem.
- MkDocs documentation.

Share CV at siddhi.pandey@adecco.com or call 6366783349.
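For the live/streaming-plus-batch requirement above, a minimal Databricks Structured Streaming sketch might look like the following. The mount paths, event schema, and window/watermark sizes are hypothetical.

```python
# Minimal Structured Streaming sketch: JSON events -> windowed Delta aggregate.
# Paths, schema, and window/watermark sizes are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

schema = (StructType()
          .add("event_id", StringType())
          .add("amount", DoubleType())
          .add("event_ts", TimestampType()))

events = spark.readStream.schema(schema).json("/mnt/raw/events/")

totals = (events
          .withWatermark("event_ts", "10 minutes")      # tolerate late data
          .groupBy(F.window("event_ts", "5 minutes"))
          .agg(F.sum("amount").alias("total_amount")))

(totals.writeStream
       .outputMode("append")                            # emit only closed windows
       .format("delta")
       .option("checkpointLocation", "/mnt/checkpoints/events/")
       .start("/mnt/curated/event_totals/"))
```

The same job can be pointed at a batch source by swapping `readStream`/`writeStream` for `read`/`write`, which is the usual way Databricks teams cover both modes with one codebase.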

Posted 2 weeks ago

Apply

3.0 - 8.0 years

12 - 16 Lacs

Hyderabad

Work from Office


Job Area: Information Technology Group, Information Technology Group > Systems Analysis

General Summary: We are seeking a Systems Analyst, Senior to join our growing organization, with specialized skills in IBM Planning Analytics/TM1 and a functional understanding of Finance budgeting and forecasting. This role involves advanced development, troubleshooting, and implementation of TM1 solutions to meet complex business requirements. The person will be part of the Finance Planning and Reporting team, will work closely with his/her manager, and will help deliver the TM1 planning and budgeting roadmap for global stakeholders.

Key Responsibilities:
- Design and develop IBM Planning Analytics (TM1) solutions per standards.
- Write logical, complex, concise, efficient, and well-documented code for both TM1 rules and Turbo Integrator processes.
- Good to have knowledge of Python and the TM1py libraries (see the sketch after this listing).
- Write business requirement specifications, define the level of effort for projects/enhancements, and design and coordinate system tests to ensure solutions meet business requirements.
- SQL skills to work with source data and understand source data structures; good understanding of SQL and the ability to write complex queries.
- Understanding of cloud technologies, especially AWS and Databricks, is an added advantage.
- Experience with client reporting and dashboard tools like Tableau, PA Web, and PAFE.
- Understanding of ETL processes and data manipulation.
- Work independently with little supervision, taking responsibility for your own work and making decisions that are moderate in impact; errors may have financial impact or affect projects, operations, or customer relationships, and may require involvement beyond the immediate work group to correct.
- Provide ongoing system support, including troubleshooting and resolving issues, to ensure optimal system performance and reliability.
- Use verbal and written communication skills to convey information that may be complex to others who may have limited knowledge of the subject.
- Use deductive and inductive problem solving; multiple approaches may be necessary, often with missing or incomplete information; intermediate data analysis/interpretation skills may be required.
- Exercise substantial creativity to innovate new processes, procedures, or work products within guidelines or to achieve established objectives.

Minimum Qualifications: 3+ years of IT-relevant work experience with a Bachelor's degree, OR 5+ years of IT-relevant work experience without a Bachelor's degree.

Qualifications: The ideal candidate will have 8-10 years of experience in designing, modeling, and developing enterprise performance management (EPM) applications using IBM Planning Analytics (TM1).
- Lead the design, modeling, and development of TM1 applications, including TI scripting, MDX, rules, feeders, and performance tuning.
- Provide technical expertise in identifying, evaluating, and developing systems and procedures that are efficient, cost-effective, and meet user requirements.
- Plan and execute unit, integration, and acceptance testing.
- Must be a good team player who can work seamlessly with global teams and data teams.
- Excellent communication and collaboration skills to work with business stakeholders.
- Functional understanding of Finance budgeting and forecasting.
- Understanding of cloud technologies, especially AWS and Databricks, is an added advantage.
- Experience in Agile methodologies and JIRA user stories.
- Able to design and develop solutions using Python per standards.
- Required: Bachelor's or Master's degree in information science, computer science, business, or equivalent work experience.
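Since the posting pairs TM1 with Python and the TM1py libraries, a minimal TM1py sketch for reading a single cube cell might look like this. The server details, cube name, and element names are all hypothetical.

```python
# Minimal TM1py sketch: read one cell from a planning cube.
# Connection details, cube name, and element names are hypothetical.
from TM1py import TM1Service

with TM1Service(address="tm1.example.com", port=8001,
                user="admin", password="***", ssl=True) as tm1:
    # Elements are listed in the cube's dimension order, comma-separated.
    value = tm1.cells.get_value("Budget", "FY2024,Jan,Revenue")
    print("Jan FY2024 revenue:", value)
```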

Posted 2 weeks ago

Apply