
5317 PySpark Jobs - Page 48

JobPe aggregates listings for easy access; you apply directly on the original job portal.

9.0 - 14.0 years

10 - 16 Lacs

Chennai

Work from Office

Source: Naukri

Azure Databricks, Data Factory, PySpark, SQL. If you are interested in this position, send your CV to muniswamyinfyjob@gmail.com

Posted 1 week ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are looking for a highly skilled and experienced Lead Data Analyst to lead data initiatives and deliver actionable insights that drive strategic decisions. The ideal candidate will have deep expertise in data analytics, cloud data platforms, and modern data engineering tools, including Databricks, Azure Data Factory, and PySpark. This role requires solid leadership, technical proficiency, and excellent communication skills to collaborate across teams and influence business outcomes.

Primary Responsibilities:
- Lead the design and execution of complex data analysis projects to support business strategy and operations
- Build and optimize data pipelines using Azure Data Factory and Databricks
- Perform advanced data analysis and modeling using PySpark, SQL, and Python
- Develop and maintain dashboards and reports using tools like Power BI, Tableau, or Looker
- Collaborate with data engineers, product managers, and business stakeholders to define data requirements and deliver insights
- Ensure data quality, governance, and compliance across all analytics initiatives
- Mentor junior analysts and foster a data-driven culture within the organization
- Present findings and recommendations to senior leadership in a clear and compelling manner
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- 8+ years of experience in data analytics, business intelligence, or a related field
- Hands-on experience with Databricks and Azure Data Factory
- Experience with data visualization tools such as Power BI, Tableau, or Looker
- Proficiency in SQL, Python, and PySpark for data manipulation and analysis
- Solid understanding of data warehousing, ETL processes, and cloud data platforms (Azure, AWS, or GCP)
- Proven analytical thinking, problem-solving, and communication skills
- Proven ability to lead projects and influence stakeholders through data storytelling

Preferred Qualifications:
- Certifications in Azure or other cloud platforms
- Experience with big data technologies (e.g., Spark, Hadoop)
- Knowledge of machine learning concepts and tools
- Familiarity with Agile methodologies and project management tools

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
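The role above leans on PySpark, SQL, and Python for analysis. As a flavor of the baseline skill being screened for, here is a minimal, self-contained PySpark aggregation sketch (dataset and column names are hypothetical, not from the posting):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-analysis").getOrCreate()

# Hypothetical claims data; in a real pipeline this would come from a
# governed source such as a Delta table landed by Azure Data Factory.
claims = spark.createDataFrame(
    [("C001", "cardiology", 1200.0),
     ("C002", "cardiology", 300.0),
     ("C003", "oncology", 5400.0)],
    ["claim_id", "specialty", "amount"],
)

# A typical analyst aggregation: claim counts and spend per specialty.
summary = (
    claims.groupBy("specialty")
    .agg(F.count("*").alias("n_claims"),
         F.sum("amount").alias("total_amount"))
    .orderBy(F.desc("total_amount"))
)
summary.show()
```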

Posted 1 week ago

Apply

8.0 - 13.0 years

17 - 25 Lacs

Bangalore Rural, Bengaluru

Work from Office

Source: Naukri

Call: 7738402343 | Mail: divyani@contactxindia.com

Role & responsibilities: Snowflake with Python, day shift.
- More than 8 years of IT experience, specifically in the data engineering stream
- Development skills in Snowflake, basic IBM DataStage or any other ETL tool, expert-level SQL, basics of Python/PySpark, and AWS, along with high proficiency in Oracle SQL
- Hands-on experience handling databases, along with experience in a scheduling tool such as Control-M
- Excellent customer service, interpersonal, communication and team collaboration skills
- Excellent database debugging skills; should have played a key member role in earlier projects
- Excellent SQL and PL/SQL coding (development) skills
- Ability to identify and implement process and/or application improvements
- Must be able to work on multiple simultaneous tasks with limited supervision
- Able to follow change management procedures and internal guidelines
- Any relevant technical certification in DataStage is a plus

Posted 1 week ago

Apply

8.0 - 12.0 years

16 - 30 Lacs

Pune, Bengaluru, Delhi / NCR

Work from Office

Source: Naukri

We are looking for an experienced Senior Software Engineer with deep expertise in Spark SQL / SQL development to lead the design, development, and optimization of complex database systems. As a Senior Spark SQL/SQL Developer, you will play a key role in creating and maintaining high-performance, scalable database solutions that meet business requirements and support critical applications. You will collaborate with engineering teams, mentor junior developers, and drive improvements in database architecture and performance.

Key Responsibilities:
- Design, develop, and optimize complex Spark SQL / SQL queries, stored procedures, views, and triggers for high-performance systems.
- Lead the design and implementation of scalable database architectures to meet business needs.
- Perform advanced query optimization and troubleshooting to ensure database performance, efficiency, and reliability.
- Mentor junior developers and provide guidance on best practices for SQL development, performance tuning, and database design.
- Collaborate with cross-functional teams, including software engineers, product managers, and system architects, to understand requirements and deliver robust database solutions.
- Conduct code reviews to ensure code quality, performance standards, and compliance with database design principles.
- Develop and implement strategies for data security, backup, disaster recovery, and high availability.
- Monitor and maintain database performance, ensuring minimal downtime and optimal resource utilization.
- Contribute to long-term technical strategies for database management and integration with other systems.
- Write and maintain comprehensive documentation on database systems, queries, and architecture.

Required Skills & Qualifications:
- Experience: 7+ years of hands-on experience in SQL development, data engineering, or a related field.
- Expert-level proficiency in Spark SQL, with extensive experience with big data (Hive), MPP (Teradata), and relational databases such as SQL Server, MySQL, or Oracle.
- Strong experience in database design, optimization, and troubleshooting.
- Deep knowledge of query optimization, indexing, and performance tuning techniques.
- Strong understanding of database architecture, scalability, and high-availability strategies.
- Experience with large-scale, high-transaction databases and data warehousing.
- Strong problem-solving skills with the ability to analyze complex data issues and provide effective solutions.
- Experience with data testing and data reconciliation.
- Ability to mentor and guide junior developers and promote best practices in SQL development.
- Proficiency in database migration, version control, and integration with applications.
- Excellent communication and collaboration skills, with the ability to interact with both technical and non-technical stakeholders.

Preferred Qualifications:
- Experience with NoSQL databases (e.g., MongoDB, Cassandra) and cloud-based databases (e.g., AWS RDS, Azure SQL Database).
- Familiarity with data analytics, ETL processes, and data pipelines.
- Experience with automation tools, CI/CD pipelines, and agile methodologies.
- Familiarity with programming languages such as Python, Java, or C#.

Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
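Since the posting emphasizes query optimization in Spark SQL, here is a small sketch of one standard tuning move, broadcasting a small dimension table to avoid a shuffle; the frames and sizes are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("sparksql-tuning").getOrCreate()

# Hypothetical large fact table and small dimension table.
fact = spark.range(1_000_000).withColumnRenamed("id", "cust_id")
dim = spark.createDataFrame([(0, "IN"), (1, "US")], ["cust_id", "country"])

# Broadcasting the small side replaces a shuffle join with a
# BroadcastHashJoin, usually the first lever in Spark SQL tuning.
joined = fact.join(broadcast(dim), "cust_id", "left")

# explain() prints the physical plan so the join strategy can be verified.
joined.explain()
```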

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Position Description
Bachelor's in Computer Science, Computer Engineering or a related field. 5+ years of development experience with Spark (PySpark), Python and SQL, with extensive knowledge of building data pipelines.

Core Skills:
- Hands-on experience with Databricks development
- Strong experience developing on Linux OS
- Experience with scheduling and orchestration (e.g. Databricks Workflows, Airflow, Prefect, Control-M)
- Solid understanding of distributed systems, data structures, and design principles
- Agile development methodologies (e.g. SAFe, Kanban, Scrum)
- Comfortable communicating with teams via showcases/demos

Your Future Duties and Responsibilities:
- Play a key role in establishing and implementing migration patterns for the Data Lake Modernization project
- Actively migrate use cases from our on-premises Data Lake to Databricks on GCP
- Collaborate with Product Management and business partners to understand use case requirements and reporting
- Adhere to internal development best practices/lifecycle (e.g. testing, code reviews, CI/CD, documentation)
- Document and showcase feature designs/workflows
- Participate in team meetings and discussions around product development
- Stay up to date on the latest industry trends and design patterns

Secondary Skills:
- 3+ years of experience with Git
- 3+ years of experience with CI/CD (e.g. Azure Pipelines)
- Experience with streaming technologies, such as Kafka and Spark
- Experience building applications on Docker and Kubernetes
- Cloud experience (e.g. Azure, Google)

Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value: you'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last, supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.

Posted 1 week ago

Apply

3.0 - 8.0 years

2 - 7 Lacs

Bengaluru

Work from Office

Source: Naukri

Experience with Azure cloud data warehouses and Azure and NoSQL databases. Experience with Azure Data Lake, data warehousing, and DevOps pipelines (CI/CD).

Posted 1 week ago

Apply

4.0 years

0 Lacs

Kochi, Kerala, India

On-site

Source: LinkedIn

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams and databases
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on cloud data platforms (AWS) or HDFS
- Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies
- Develop streaming pipelines
- Work with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka

Preferred Education: Bachelor's degree

Required Technical and Professional Expertise:
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on cloud data platforms on AWS
- Experience with AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB
- Good to excellent SQL skills
- Exposure to streaming solutions and message brokers such as Kafka

Preferred Technical and Professional Experience: Certification in AWS, and Databricks or Cloudera Spark certified developers
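For the streaming-pipeline experience this posting asks for, a minimal Structured Streaming sketch that reads from Kafka is shown below; the broker, topic, and checkpoint path are hypothetical, and the spark-sql-kafka package must be on the classpath for it to run:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Hypothetical broker and topic names.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers key/value as binary; cast to string before parsing.
parsed = events.select(F.col("value").cast("string").alias("payload"))

query = (
    parsed.writeStream.format("console")
    .option("checkpointLocation", "/tmp/ckpt/events")  # enables restart recovery
    .start()
)
query.awaitTermination()
```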

Posted 1 week ago

Apply

6.0 - 11.0 years

18 - 25 Lacs

Hyderabad

Hybrid

Source: Naukri

Primary Responsibilities:
- Design, code, test, document, and maintain high-quality, scalable data pipelines/solutions in the cloud
- Work in both dev and ops; should be open to working in ops with flexible timings
- Ingest and transform data using a variety of technologies from a variety of sources (APIs, streaming, files, databases)
- Develop reusable patterns and encourage innovation that will increase the team's velocity
- Design and develop applications in an agile environment; deploy using CI/CD
- Participate in prototyping as well as design and code reviews; own or assist with incident and problem management
- Self-starter who can learn things quickly, and who is enthusiastic and actively engaged
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- Bachelor's degree in a technical domain
- Experience with Databricks, Python, Spark, PySpark, SQL, and Azure Data Factory
- Design and implementation of data warehouses/data lakes (Databricks/Snowflake)
- Data architecture and data modelling
- Operations processes, reporting from operations, and incident resolution
- GitHub Actions/Jenkins or a similar CI/CD tool, cloud CI/CD, GitHub
- NoSQL and relational databases

Preferred Qualifications:
- Experience or knowledge of Apache Kafka
- Experience or knowledge of data ingestion from a variety of APIs
- Experience working in an Agile/Scrum environment

Posted 1 week ago

Apply

6.0 - 11.0 years

18 - 25 Lacs

Hyderabad

Work from Office

Source: Naukri

SUMMARY
Data Modeling Professional. Location: Hyderabad/Pune.

Experience: The ideal candidate should possess at least 6 years of relevant experience in data modeling, with proficiency in SQL, Python, PySpark, Hive, ETL, Unix, and Control-M (or similar scheduling tools), along with GCP.

Key Responsibilities:
- Develop and configure data pipelines across various platforms and technologies.
- Write complex SQL queries for data analysis on databases such as SQL Server, Oracle, and Hive.
- Create solutions to support AI/ML models and generative AI.
- Work independently on specialized assignments within project deliverables.
- Provide solutions and tools to enhance engineering efficiency.
- Design processes, systems, and operational models for end-to-end execution of data pipelines.

Preferred Skills: Experience with GCP, particularly Airflow, Dataproc, and BigQuery, is advantageous.

Requirements:
- Minimum 6 years of experience in data modeling with SQL, Python, PySpark, Hive, ETL, Unix, and Control-M (or similar scheduling tools).
- Proficiency in writing complex SQL queries for data analysis.
- Strong problem-solving and analytical abilities.
- Excellent communication and presentation skills; the ability to communicate efficiently at a global level is paramount.
- Ability to deliver high-quality materials against tight deadlines and to work effectively under pressure with rapidly changing priorities.
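The "complex SQL queries for data analysis" such roles test for usually means window-function work. A self-contained sketch (table and columns hypothetical):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-analysis").getOrCreate()

spark.createDataFrame(
    [("A", "2024-01-01", 10.0),
     ("A", "2024-01-02", 12.0),
     ("B", "2024-01-01", 7.0)],
    ["account", "dt", "balance"],
).createOrReplaceTempView("balances")  # hypothetical table

# Latest balance per account via ROW_NUMBER over a per-account window.
latest = spark.sql("""
    SELECT account, dt, balance
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY account ORDER BY dt DESC) AS rn
        FROM balances
    ) t
    WHERE rn = 1
""")
latest.show()
```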

Posted 1 week ago

Apply

5.0 - 9.0 years

11 - 12 Lacs

Bengaluru

Work from Office

Source: Naukri

5 to 9 years of experience. The Databricks + SQL combination is a must. Nice to have: experience in the HP ecosystem (FDL architecture).
EXPERIENCE: 6-8 years
SKILLS - Primary Skill: Data Engineering. Sub Skill(s): Data Engineering. Additional Skill(s): Databricks, SQL

Posted 1 week ago

Apply

5.0 - 10.0 years

27 - 37 Lacs

Pune

Work from Office

Source: Naukri

Excellent in SDLC processes. Ability to participate in deep technical discussions with customers and elicit requirements. Ability to work with tools including Java/Scala/Python/Spark/SQL. Immediate to 30-day joiners.

Posted 1 week ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Hyderabad

Work from Office

Source: Naukri

What you will do
In this vital role, we are looking for a highly motivated, expert Senior Data Engineer who can own the design and development of complex data pipelines, solutions and frameworks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role calls for deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines to support structured, semi-structured, and unstructured data processing across the Enterprise Data Fabric.
- Implement real-time and batch data processing solutions, integrating data from multiple sources into a unified, governed data fabric architecture.
- Optimize big data processing frameworks using Apache Spark, Hadoop, or similar distributed computing technologies to ensure high availability and cost efficiency.
- Work with metadata management and data lineage tracking tools to enable enterprise-wide data discovery and governance.
- Ensure data security, compliance, and role-based access control (RBAC) across data environments.
- Optimize query performance, indexing strategies, partitioning, and caching for large-scale data sets.
- Develop CI/CD pipelines for automated data pipeline deployments, version control, and monitoring.
- Implement data virtualization techniques to provide seamless access to data across multiple storage systems.
- Collaborate with cross-functional teams, including data architects, business analysts, and DevOps teams, to align data engineering strategies with enterprise goals.
- Stay up to date with emerging data technologies and best practices, ensuring continuous improvement of the Enterprise Data Fabric architecture.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 4 to 6 years of Computer Science, IT or related field experience, OR Bachelor's degree and 6 to 8 years of Computer Science, IT or related field experience
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile SAFe certification preferred

Must-Have Skills:
- Hands-on experience with data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies
- Proficiency in workflow orchestration and performance tuning of big data processing
- Strong understanding of AWS services
- Experience with Data Fabric, Data Mesh, or similar enterprise-wide data architectures
- Ability to quickly learn, adapt and apply new technologies
- Strong problem-solving and analytical skills
- Excellent communication and collaboration skills
- Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices

Good-to-Have Skills:
- Deep expertise in the biotech and pharma industries
- Experience writing APIs to make data available to consumers
- Experience with SQL/NoSQL databases and vector databases for large language models
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Ability to learn quickly; organized and detail-oriented
- Strong presentation and public speaking skills
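Two of the optimization levers this posting names, partitioning and caching, look roughly like this in PySpark (output path and columns hypothetical):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pipeline-opt").getOrCreate()

df = spark.range(100_000).withColumn("event_date", F.lit("2024-06-01"))

# Caching pays off only when the same frame feeds several actions.
df.cache()
print(df.count())

# Partitioning by a common filter column (here event_date) lets
# downstream readers prune files instead of scanning the full dataset.
df.write.mode("overwrite").partitionBy("event_date").parquet("/tmp/events_parquet")
```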

Posted 1 week ago

Apply

3.0 - 4.0 years

4 - 6 Lacs

Hyderabad

Work from Office

Source: Naukri

Senior Manager Information Systems Automation

What you will do
We are seeking a hands-on, experienced and dynamic Technical Infrastructure Automation Manager to lead and manage our infrastructure automation initiatives. The ideal candidate will have a strong hands-on background in IT infrastructure, cloud services, and automation tools, along with the leadership skills to guide a team towards improving operational efficiency, reducing manual processes, and ensuring scalability of systems. This role will lead a team of engineers across multiple functions, including Ansible development, ServiceNow development, process automation, and Site Reliability Engineering (SRE), and will be responsible for ensuring the reliability, scalability, and security of automation services. The Infrastructure Automation team will be responsible for automating infrastructure provisioning, deployment, configuration management, and monitoring. You will work closely with development, operations, and security teams to drive automation solutions that enhance the overall infrastructure's efficiency and reliability. This role demands the ability to drive and deliver against key organizational strategic initiatives, foster a collaborative environment, and deliver high-quality results in a matrixed organizational structure. Please note, this is an onsite role based in Hyderabad.

Roles & Responsibilities:
- Automation Strategy & Leadership: Lead the development and implementation of infrastructure automation strategies. Collaborate with key collaborators (DevOps, IT Operations, Security, etc.) to define automation goals and ensure alignment with company objectives. Provide leadership and mentorship to a team of engineers, ensuring continuous growth and skill development.
- Infrastructure Automation: Design and implement automation frameworks for infrastructure provisioning, configuration management, and orchestration (e.g., using tools like Terraform, Ansible, Puppet, or Chef). Manage and optimize CI/CD pipelines for infrastructure as code (IaC) to ensure seamless delivery and updates. Work with cloud providers (AWS, Azure, GCP) to implement automation solutions for managing cloud resources and services.
- Process Improvement: Identify areas for process improvement by analyzing current workflows, systems, and infrastructure operations. Create and implement solutions to reduce operational overhead and increase system reliability, scalability, and security. Automate and streamline recurring tasks, including patch management, backups, and system monitoring.
- Collaboration & Communication: Collaborate with multi-functional teams (Development, IT Operations, Security, etc.) to ensure infrastructure automation aligns with business needs. Regularly communicate progress, challenges, and successes to management, offering insights on how automation is driving efficiencies.
- Documentation & Standards: Maintain proper documentation for automation scripts, infrastructure configurations, and processes. Develop and enforce best practices and standards for automation and infrastructure management.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree with 8-10 years of experience in Observability operations, with at least 3 years in management; OR Bachelor's degree with 10-14 years of experience in Observability operations, with at least 4 years in management; OR Diploma with 14-18 years of experience in Observability operations, with at least 5 years in management
- 12+ years of experience in IT infrastructure management, with at least 4+ years in a leadership or managerial role
- Strong expertise in automation tools and frameworks such as Terraform, Ansible, Chef, Puppet, or similar
- Proficiency in scripting languages (e.g., Python, Bash, PowerShell)
- Hands-on experience with cloud platforms (AWS) and containerization technologies (Docker, Kubernetes)
- Hands-on knowledge of Infrastructure as Code (IaC) principles and CI/CD pipeline implementation
- Experience with ServiceNow development and administration
- Solid understanding of networking, security protocols, and infrastructure design
- Excellent problem-solving skills and the ability to troubleshoot complex infrastructure issues
- Strong leadership and communication skills, with the ability to work effectively across teams

Professional Certifications (Preferred): ITIL or PMP Certification; Red Hat Certified System Administrator; ServiceNow Certified System Administrator; AWS Certified Solutions Architect

Preferred Qualifications:
- Strong experience with Ansible, including playbooks, roles, and modules
- Strong experience with infrastructure-as-code concepts and other automation tools like Terraform or Puppet
- Strong understanding of user-centered design and building scalable, high-performing web and mobile interfaces on the ServiceNow platform
- Proficiency with both Windows and Linux/Unix-based operating systems
- Knowledge of cloud platforms (AWS, Azure, Google Cloud) and automation techniques in those environments
- Familiarity with CI/CD tools and processes, particularly the integration of Ansible in pipelines
- Understanding of version control systems (Git)
- Strong troubleshooting, debugging, and performance optimization skills
- Experience with hybrid cloud environments and multi-cloud strategies
- Familiarity with DevOps practices and tools
- Experience operating within a validated systems environment (FDA, European Agency for the Evaluation of Medicinal Products, Ministry of Health, etc.)

Soft Skills: Excellent leadership and team management skills; change management expertise; crisis management capabilities; strong presentation and public speaking skills; an analytical mindset with a focus on continuous improvement; detail-oriented with the capacity to manage multiple projects and priorities; self-motivated and able to work independently or as part of a team; strong communication skills to interact effectively with both technical and non-technical collaborators; ability to work effectively with global, virtual teams.

Shift Information: This position is an onsite role and may require working during later hours to align with business hours. Candidates must be willing and able to work outside of standard hours as required to meet business needs.

Posted 1 week ago

Apply

9.0 - 12.0 years

14 - 19 Lacs

Noida, Mumbai, Bengaluru

Work from Office

Source: Naukri

Job Type: Full Time. Job Location: Pune / Mumbai / Bangalore / Noida. Total Experience: 9 to 12 years.

Job Description:
- 8 years of relevant work experience
- Must be fully conversant with big-data processing approaches and schema-on-read methodologies; knowledge of Azure Data Factory, Azure Databricks (PySpark), Azure Data Lake Storage (ADLS Gen2) and Azure Synapse is a must
- Good to have excellent development skills and extensive hands-on development and coding experience in a variety of languages, e.g., Python (PySpark compulsory), SQL, and Power BI DAX (good to have)
- Experience designing solutions for cloud data warehouses and working with an architect on generic implementation design of ETL/ELT pipelines
- Experience implementing robust ETL/ELT pipelines within Azure Data Factory / Azure Synapse Pipelines, including error handling, performance improvements, and identifying performance bottlenecks
- Should have knowledge of designing a reporting framework layer for Power BI / Tableau

Roles and Responsibilities:
- Design, implement, and deliver solutions on batch and streaming data
- Good to have experience working on data warehouses using different data modelling techniques (Kimball, SCD2 (Slowly Changing Dimensions))
- Work in a collaborative and Agile environment to deliver data products
- Adapt to new technology advancements and learn new things on Microsoft Azure and competing technologies (no hands-on experience needed, but know the comparisons)
- Ensure that client deliveries are made on time, with quality
- Coordinate with the engineering team, architect and client through the PM for delivery
- Align with the organization's vision for data practices and work towards improving skills as needed
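The SCD2 modelling this posting mentions is usually implemented as a MERGE on the lakehouse. A hedged sketch using the Delta Lake Python API (assumes a Delta-enabled session such as Databricks; the path is hypothetical, and a simpler Type-1-style update is shown for brevity, since a full SCD2 would also close out old rows with effective/expiry dates):

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("dim-upsert").getOrCreate()

# Hypothetical incoming changes for a customer dimension.
updates = spark.createDataFrame([(1, "gold"), (2, "silver")], ["cust_id", "tier"])

# Hypothetical path to an existing Delta dimension table.
target = DeltaTable.forPath(spark, "/mnt/lake/dim_customer")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.cust_id = s.cust_id")
    .whenMatchedUpdateAll()      # SCD2 would instead expire the old row
    .whenNotMatchedInsertAll()   # and insert a new current-version row
    .execute()
)
```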

Posted 1 week ago

Apply

6.0 - 11.0 years

20 - 27 Lacs

Pune

Work from Office

Source: Naukri

Mandatory Primary Skills: Python, PySpark & SQL. Secondary Skills: any cloud experience, DWH, BI tools (Qlik, Power BI, etc.)

Posted 1 week ago

Apply

5.0 - 10.0 years

27 - 37 Lacs

Hyderabad

Work from Office

Source: Naukri

Excellent in SDLC processes. Ability to participate in deep technical discussions with customers and elicit requirements.
Required candidate profile: Ability to work with tools including Java/Scala/Python/Spark/SQL. Immediate to 30-day joiners.

Posted 1 week ago

Apply

5.0 - 10.0 years

27 - 37 Lacs

Bangalore Rural

Work from Office

Source: Naukri

Excellent in SDLC processes. Ability to participate in deep technical discussions with customers and elicit requirements.
Required candidate profile: Ability to work with tools including Java/Scala/Python/Spark/SQL. Immediate to 30-day joiners.

Posted 1 week ago

Apply

5.0 - 10.0 years

18 - 22 Lacs

Bengaluru

Hybrid

Source: Naukri

We are looking for a candidate seasoned in handling data warehousing challenges: someone who enjoys learning new technologies, does not hesitate to bring his/her perspective to the table, is enthusiastic about working in a team, and can own and deliver long-term projects to completion.

Responsibilities:
• Contribute to the team's vision and articulate strategies to have fundamental impact at our massive scale.
• Bring a product-focused mindset: it is essential to understand business requirements and architect systems that will scale and extend to accommodate those needs.
• Diagnose and solve complex problems in distributed systems; develop and document technical solutions and sequence work to make fast, iterative deliveries and improvements.
• Build and maintain high-performance, fault-tolerant, and scalable distributed systems that can handle our massive scale.
• Provide solid leadership within your own problem space through a data-driven approach, robust software designs, and effective delegation.
• Participate in, or spearhead, design reviews with peers and stakeholders to adopt what's best suited amongst available technologies.
• Review code developed by other developers and provide feedback to ensure best practices (e.g., checking code in, accuracy, testability, and efficiency).
• Automate cloud infrastructure, services, and observability.
• Develop CI/CD pipelines and testing automation (nice to have).
• Establish and uphold best engineering practices through thorough code and design reviews and improved processes and tools.
• Groom junior engineers through mentoring and delegation.
• Drive a culture of trust, respect, and inclusion within your team.

Minimum Qualifications:
• Bachelor's degree in Computer Science, Engineering or a related field, or equivalent training, fellowship, or work experience.
• Minimum 5 years of experience curating data, with hands-on experience working on ETL/ELT tools.
• Strong overall programming skills; able to write modular, maintainable code, preferably in Python and SQL.
• Strong data warehousing concepts and SQL skills, including an understanding of dimensional modelling and at least one relational database.
• Experience with AWS.
• Exposure to Snowflake (or similar tools) and ingesting data into it.
• Humble, collaborative team player, willing to step up and support your colleagues.
• Effective communication, problem-solving and interpersonal skills.
• Commitment to growing deeper in the knowledge and understanding of how to improve our existing applications.

Preferred Qualifications:
• Experience with the following tools: DBT, Fivetran, Airflow.
• Knowledge and experience in Spark, Hadoop 2.0, and its ecosystem.
• Experience with automation frameworks/tools like Git and Jenkins.

Primary Skills: Snowflake, Python, SQL, DBT. Secondary Skills: Fivetran, Airflow, Git, Jenkins, AWS, SQL DBM
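Since Airflow appears in both the preferred tools and secondary skills, a minimal DAG of the shape such a role maintains is sketched below (Airflow 2.4+ syntax; the DAG and task ids are hypothetical, and the callable is a stand-in for a real Snowflake/DBT/Fivetran step):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    # Placeholder for the real ELT step, e.g. a Snowflake COPY INTO
    # or a dbt run triggered through a provider operator.
    print("load step")


with DAG(
    dag_id="warehouse_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```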

Posted 1 week ago

Apply

6.0 - 9.0 years

4 - 8 Lacs

Pune

Work from Office

Source: Naukri

Your Role
As a senior software engineer with Capgemini, you will have 6+ years of experience in Azure technology with a strong project track record. In this role you will need:
- Strong customer orientation, decision making, problem solving, communication and presentation skills
- Very good judgement skills and the ability to shape compelling solutions and solve unstructured problems with assumptions
- Very good collaboration skills and the ability to interact with multi-cultural and multi-functional teams spread across geographies
- Strong executive presence and entrepreneurial spirit
- Superb leadership and team building skills, with the ability to build consensus and achieve goals through collaboration rather than direct line authority

Your Profile
- Experience with Azure Databricks and Data Factory
- Experience with Azure data components such as Azure SQL Database, Azure SQL Warehouse, and Synapse Analytics
- Experience in Python/PySpark/Scala/Hive programming
- Experience with Azure Databricks (ADB) is a must-have
- Experience with building CI/CD pipelines in data environments

Posted 1 week ago

Apply

4.0 - 9.0 years

4 - 8 Lacs

Chennai

Work from Office

Source: Naukri

Your Role
As a senior software engineer with Capgemini, you should have 4+ years of experience as a Snowflake Data Engineer with a strong project track record. In this role you will need:
- Strong customer orientation, decision making, problem solving, communication and presentation skills
- Very good judgement skills and the ability to shape compelling solutions and solve unstructured problems with assumptions
- Very good collaboration skills and the ability to interact with multi-cultural and multi-functional teams spread across geographies
- Strong executive presence and entrepreneurial spirit
- Superb leadership and team building skills, with the ability to build consensus and achieve goals through collaboration rather than direct line authority

Your Profile
- 4+ years of experience in data warehousing and cloud data solutions
- Minimum 2+ years of hands-on experience with end-to-end Snowflake implementation
- Experience in developing data architecture and roadmap strategies, with the knowledge to establish data governance and quality frameworks within Snowflake
- Expertise or strong knowledge of Snowflake best practices, performance tuning, and query optimisation
- Experience with cloud platforms like AWS or Azure, and familiarity with Snowflake's integration with these environments; strong knowledge of at least one cloud (AWS or Azure) is mandatory

Skills (competencies): Ab Initio, Agile (Software Development Framework), Apache Hadoop, AWS Airflow, AWS Athena, AWS CodePipeline, AWS EFS, AWS EMR, AWS Redshift, AWS S3, Azure ADLS Gen2, Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse, Bitbucket, Change Management, Client Centricity, Collaboration, Continuous Integration and Continuous Delivery (CI/CD), Data Architecture Patterns, Data Format Analysis, Data Governance, Data Modeling, Data Validation, Data Vault Modeling, Database Schema Design, Decision-Making, DevOps, Dimensional Modeling, GCP BigTable, GCP BigQuery, GCP Cloud Storage, GCP DataFlow, GCP DataProc, Git, Greenplum, HQL, IBM DataStage, IBM DB2, Industry Standard Data Modeling (FSLDM), Industry Standard Data Modeling (IBM FSDM), Influencing, Informatica IICS, Inmon methodology, JavaScript, Jenkins, Kimball, Linux - Red Hat, Negotiation, Netezza, NewSQL, Oracle Exadata, Performance Tuning, Perl, Platform Update Management, Project Management, PySpark, Python, R, RDD Optimization, CentOS, SAS, Scala, Spark, Shell Script, Snowflake, Spark Code Optimization, SQL, Stakeholder Management, Sun Solaris, Synapse, Talend, Teradata, Time Management, Ubuntu, Vendor Management

Posted 1 week ago

Apply

4.0 - 9.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Source: Naukri

Your Role
As a senior software engineer with Capgemini, you should have 4+ years of experience as an Azure Data Engineer with a strong project track record. In this role you will need:
- Strong customer orientation, decision making, problem solving, communication and presentation skills
- Very good judgement skills and the ability to shape compelling solutions and solve unstructured problems with assumptions
- Very good collaboration skills and the ability to interact with multi-cultural and multi-functional teams spread across geographies
- Strong executive presence and entrepreneurial spirit
- Superb leadership and team building skills, with the ability to build consensus and achieve goals through collaboration rather than direct line authority

Your Profile
- Experience with Azure Databricks and Data Factory
- Experience with Azure data components such as Azure SQL Database, Azure SQL Warehouse, and Synapse Analytics
- Experience in Python/PySpark/Scala/Hive programming
- Experience with Azure Databricks (ADB)
- Experience with building CI/CD pipelines in data environments

Primary Skills: ADF (Azure Data Factory) or ADB (Azure Databricks). Secondary Skills: Excellent verbal and written communication and interpersonal skills.

Skills (competencies): as listed above for the Snowflake Data Engineer role.

Posted 1 week ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Mumbai

Work from Office

Source: Naukri

Your Role
As a senior software engineer with Capgemini, you should have 4+ years of experience as a GCP Data Engineer with a strong project track record. In this role you will need:
- Strong customer orientation, decision making, problem solving, communication and presentation skills
- Very good judgement skills and the ability to shape compelling solutions and solve unstructured problems with assumptions
- Very good collaboration skills and the ability to interact with multi-cultural and multi-functional teams spread across geographies
- Strong executive presence and entrepreneurial spirit
- Superb leadership and team building skills, with the ability to build consensus and achieve goals through collaboration rather than direct line authority

Your Profile
- Minimum 4 years of experience in GCP data engineering
- Strong data engineering experience using Java or Python programming languages, or Spark on Google Cloud
- Should have worked on handling big data
- Strong communication skills; experience in Agile methodologies
- ETL, ELT, data movement and data processing skills
- Certification as a Professional Google Cloud Data Engineer is an added advantage
- Proven analytical skills and a problem-solving attitude
- Ability to function effectively in a cross-team environment

Primary Skills: GCP data engineering; Java/Python/Spark on GCP, with programming experience in at least one of Python, Java, or PySpark; GCS (Cloud Storage), Composer (Airflow) and BigQuery experience; experience building data pipelines using the above skills.

Skills (competencies): as listed above for the Snowflake Data Engineer role.

Posted 1 week ago

Apply

4.0 years

0 Lacs

New Delhi, Delhi, India

On-site

Source: LinkedIn

About Agoda
Agoda is an online travel booking platform for accommodations, flights, and more. We build and deploy cutting-edge technology that connects travelers with a global network of 4.7M hotels and holiday properties worldwide, plus flights, activities, and more. Based in Asia and part of Booking Holdings, our 7,100+ employees representing 95+ nationalities in 27 markets foster a work environment rich in diversity, creativity, and collaboration. We innovate through a culture of experimentation and ownership, enhancing the ability for our customers to experience the world.

Our Purpose - Bridging the World Through Travel
We believe travel allows people to enjoy, learn and experience more of the amazing world we live in. It brings individuals and cultures closer together, fostering empathy, understanding and happiness. We are a skillful, driven and diverse team from across the globe, united by a passion to make an impact. Harnessing our innovative technologies and strong partnerships, we aim to make travel easy and rewarding for everyone.

Get to Know Our Team
The Data department, based in Bangkok, oversees all of Agoda's data-related requirements. Our ultimate goal is to enable and increase the use of data in the company through creative approaches and the implementation of powerful resources such as operational and analytical databases, queue systems, BI tools, and data science technology. We hire the brightest minds from around the world to take on this challenge and equip them with the knowledge and tools that contribute to their personal growth and success while supporting our company's culture of diversity and experimentation. The role the Data team plays at Agoda is critical, as business users, product managers, engineers, and many others rely on us to empower their decision making. We are equally dedicated to our customers by improving their search experience with faster results and protecting them from any fraudulent activities. Data is interesting only when you have enough of it, and we have plenty. This is what drives up the challenge as part of the Data department, but also the reward.

The Opportunity
Please note: this role will be based in Bangkok. We are looking for ambitious and agile data scientists who would like to seize the opportunity to work on some of the most challenging production machine learning and big data platforms worldwide, processing some 600B events every day and making some 5B predictions. As part of the Data Science and Machine Learning (AI/ML) team, you will be exposed to real-world challenges such as: dynamic pricing, predicting customer intent in real time, ranking search results to maximize lifetime value, classifying and deep-learning content and personalization signals from unstructured data such as images and text, making personalized recommendations, innovating algorithm-supported promotions and products for supply partners, discovering insights from big data, and innovating the user experience. To tackle these challenges, you will have the opportunity to work on one of the world's largest ML infrastructures, employing dozens of GPUs working in parallel, 30K+ CPU cores and 150TB of memory.

In This Role, You'll Get to:
- Design, code, experiment and implement models and algorithms to maximize customer experience, supply-side value, business outcomes, and infrastructure readiness
- Mine big data covering hundreds of millions of customers and more than 600M daily user-generated events, supplier and pricing data, and discover actionable insights to drive improvements and innovation
- Work with developers and a variety of business owners to deliver daily results with the best quality
- Research, discover and harness new ideas that can make a difference

What You'll Need to Succeed:
- 4+ years of hands-on data science experience
- Excellent understanding of AI/ML/DL and statistics, as well as coding proficiency using related open-source libraries and frameworks
- Significant proficiency in SQL and languages like Python, PySpark and/or Scala
- Ability to lead and work independently, as well as play a key role in a team
- Good communication and interpersonal skills for working in a multicultural environment

It's Great if You Have:
- PhD or MSc in Computer Science, Operations Research, Statistics or another quantitative field
- Experience in NLP, image processing and/or recommendation systems
- Hands-on experience in data engineering, working with big data frameworks like Spark/Hadoop
- Experience in data science for e-commerce and/or OTA

We welcome both local and international applications for this role. Full visa sponsorship and relocation assistance are available for eligible candidates.

Equal Opportunity Employer
At Agoda, we pride ourselves on being a company represented by people of all different backgrounds and orientations. We prioritize attracting diverse talent and cultivating an inclusive environment that encourages collaboration and innovation. Employment at Agoda is based solely on a person's merit and qualifications. We are committed to providing equal employment opportunity regardless of sex, age, race, color, national origin, religion, marital status, pregnancy, sexual orientation, gender identity, disability, citizenship, veteran or military status, and other legally protected characteristics. We will keep your application on file so that we can consider you for future vacancies, and you can always ask to have your details removed from the file. For more details please read our privacy policy.

Disclaimer
We do not accept any terms or conditions, nor do we recognize any agency's representation of a candidate, from unsolicited third-party or agency submissions. If we receive unsolicited or speculative CVs, we reserve the right to contact and hire the candidate directly without any obligation to pay a recruitment fee.
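As a taste of the modelling work described above, here is a deliberately tiny baseline intent classifier in Spark MLlib; the features and data are hypothetical stand-ins for the user-event signals the posting describes at vastly larger scale:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("intent-baseline").getOrCreate()

# Hypothetical session features with a booked/not-booked label.
df = spark.createDataFrame(
    [(1.0, 3.0, 1), (0.0, 1.0, 0), (2.0, 5.0, 1), (0.5, 0.5, 0)],
    ["n_searches", "session_minutes", "booked"],
).withColumn("booked", F.col("booked").cast("double"))  # MLlib expects a numeric label

# Assemble raw columns into the single vector column MLlib models consume.
features = VectorAssembler(
    inputCols=["n_searches", "session_minutes"], outputCol="features"
).transform(df)

model = LogisticRegression(featuresCol="features", labelCol="booked").fit(features)
print(model.coefficients)
```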

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Chennai

Remote

Source: Naukri

Role & responsibilities
- Develop, maintain, and enhance new data sources and tables, contributing to data engineering efforts to ensure a comprehensive and efficient data architecture.
- Serve as the liaison between the Data Engineering team and the airport operations teams, developing new data sources and overseeing enhancements to the existing database; act as one of the main contact points for data requests, metadata, and statistical analysis.
- Migrate all existing Hive Metastore tables to Unity Catalog, addressing access issues and ensuring a smooth transition of jobs and tables.
- Collaborate with IT teams to validate package (gold-level data) table outputs during the production deployment of developed notebooks.
- Develop and implement data quality alerting systems and Tableau alerting mechanisms for dashboards, setting up notifications for various thresholds.
- Create and maintain standard reports and dashboards to provide insights into airport performance, helping guide stations to optimize operations and improve performance.

Preferred candidate profile
- Master's degree / UG
- Minimum 5-10 years of experience
- Databricks (Azure)
- Good communication
- Experience developing solutions on a big data platform utilizing tools such as Impala and Spark
- Advanced knowledge/experience with Azure Databricks, PySpark, and (Teradata)/Databricks SQL
- Advanced knowledge/experience in Python, along with associated development environments (e.g. JupyterHub, PyCharm, etc.)
- Advanced knowledge/experience in building Tableau / QlikView / Power BI dashboards
- Basic knowledge of HTML and JavaScript
- Immediate joiner

Skills, Licenses & Certifications
- Strong project management skills
- Proficient with Microsoft Office applications (MS Excel, Access and PowerPoint); advanced knowledge of Microsoft Excel
- Advanced aptitude in problem-solving, including the ability to logically structure an appropriate analytical framework
- Proficient in SharePoint and PowerApps, and able to use the Graph API
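One common pattern for the Hive Metastore to Unity Catalog migration this role owns is a per-table DEEP CLONE, which on Databricks copies Delta tables along with their data. A hedged sketch (catalog, schema, and table names are hypothetical; assumes a Databricks notebook where `spark` is predefined and the source tables are Delta):

```python
# List tables in the legacy Hive Metastore schema (names hypothetical).
tables = [
    row.tableName
    for row in spark.sql("SHOW TABLES IN hive_metastore.ops").collect()
]

# Copy each table into the Unity Catalog target schema. DEEP CLONE is
# Databricks SQL and applies to Delta tables; access grants on the new
# tables still have to be configured separately in Unity Catalog.
for t in tables:
    spark.sql(
        f"CREATE TABLE IF NOT EXISTS main.ops.{t} "
        f"DEEP CLONE hive_metastore.ops.{t}"
    )
```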

Posted 1 week ago

Apply