15.0 years
0 Lacs
Indore
On-site
Project Role: Custom Software Engineer
Project Role Description: Develop custom software solutions to design, code, and enhance components across systems or applications. Use modern frameworks and agile practices to deliver scalable, high-performing solutions tailored to specific business needs.
Must have skills: PySpark
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Custom Software Engineer, you will develop custom software solutions to design, code, and enhance components across systems or applications. Your typical day will involve collaborating with cross-functional teams to understand business requirements and utilizing modern frameworks and agile practices to deliver scalable, high-performing solutions tailored to specific business needs. You will engage in problem-solving activities, ensuring that the software solutions meet the highest standards of quality and performance while adapting to evolving project requirements.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve software development processes to increase efficiency.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in PySpark.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with modern software development methodologies, particularly Agile.
- Familiarity with cloud platforms and services for deploying applications.
- Ability to troubleshoot and optimize performance in software applications.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
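For context on the PySpark proficiency this role asks for, here is a minimal sketch of a batch transformation job of the kind such positions involve; the paths, column names, and application name are hypothetical placeholders, not details from the posting.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("order-enrichment").getOrCreate()

# Hypothetical input path and schema; adjust to the actual source system.
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Aggregate completed orders into a daily summary.
daily_totals = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.countDistinct("customer_id").alias("unique_customers"),
    )
)

daily_totals.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_totals/")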
Posted 3 days ago
7.0 - 10.0 years
2 - 9 Lacs
Noida
On-site
Posted On: 30 Jul 2025
Location: Noida, UP, India
Company: Iris Software

Why Join Us?
Are you inspired to grow your career at one of India's Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest-growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It's happening right here at Iris Software.

About Iris Software
At Iris Software, our vision is to be our clients' most trusted technology partner, and the first choice for the industry's top professionals to realize their full potential. With over 4,300 associates across India, U.S.A., and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation.

Working at Iris
Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about "Being Your Best" - as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We're a place where everyone can discover and be their best version.

Job Description
We need a Sr. Databricks Developer with 7 to 10 years of experience.
Core skills:
- Databricks - Level: Advanced
- SQL (MS SQL Server) - joins, SQL optimization, basic knowledge of stored procedures and functions
- PySpark - Level: Advanced
- Azure Delta Lake
- Python - Basic

Mandatory Competencies

Perks and Benefits for Irisians
At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
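As an illustration of the advanced Databricks and Delta Lake skills listed above, a minimal upsert (MERGE) sketch using the open-source delta-spark API; the table path, staging path, and join key are assumptions for the example.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # supplied automatically in Databricks notebooks

# Hypothetical Delta location and key column; replace with the real table.
target = DeltaTable.forPath(spark, "/mnt/delta/customers")
updates = spark.read.parquet("/mnt/staging/customer_updates/")

# Upsert: update matching rows, insert new ones.
(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())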
Posted 3 days ago
15.0 years
0 Lacs
Calcutta
On-site
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Apache Spark
Good to have skills: Java, Scala, PySpark
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the data architecture. You will be involved in analyzing requirements, proposing solutions, and ensuring that the data platform aligns with organizational goals and standards. Your role will require you to stay updated with industry trends and best practices to contribute effectively to the team.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Engage in continuous learning to stay abreast of emerging technologies and methodologies.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Apache Spark.
- Good-To-Have Skills: Experience with Java, Scala, PySpark.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with data integration tools and techniques.
- Familiarity with cloud platforms and services related to data engineering.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Apache Spark.
- This position is based at our Kolkata office.
- A 15 years full time education is required.
Posted 3 days ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Mandatory: Proficiency in Python with experience in Databricks (PySpark)
Good to Have:
- Hands-on experience with Apache Airflow.
- Working knowledge of PostgreSQL, MongoDB.
- Basic experience with cloud technologies like Azure, AWS, and Google Cloud.
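To illustrate the good-to-have Airflow skill, a minimal DAG sketch that would schedule a daily Databricks/PySpark ingestion task; the DAG id, schedule, and callable body are hypothetical placeholders.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    # Placeholder for the actual Databricks/PySpark ingestion logic.
    ...

with DAG(
    dag_id="daily_ingest",  # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)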
Posted 3 days ago
5.0 years
0 Lacs
Andhra Pradesh
On-site
ETL Lead
We are looking for a Senior ETL Developer in our Enterprise Data Warehouse. In this role you will be part of a team working to develop solutions enabling the business to leverage data as an asset at the bank. The Senior ETL Developer should have extensive knowledge of data warehousing and cloud technologies. If you consider data a strategic asset, evangelize the value of good data and insights, and have a passion for learning and continuous improvement, this role is for you.

Key Responsibilities:
- Translate requirements and data mapping documents into a technical design.
- Develop, enhance, and maintain code following best practices and standards.
- Create and execute unit test plans.
- Support regression and system testing efforts.
- Debug and solve issues found during testing and/or production.
- Communicate status, issues, and blockers to the project team.
- Support continuous improvement by identifying and solving opportunities.

Basic Qualifications:
- Bachelor's degree or military experience in a related field (preferably computer science).
- At least 5 years of experience in ETL development within a Data Warehouse.
- Deep understanding of enterprise data warehousing best practices and standards.
- Strong experience in software engineering, comprising designing, developing, and operating robust and highly scalable cloud infrastructure services.
- Strong experience with Python/PySpark, DataStage ETL, and SQL development.
- Proven experience in cloud infrastructure projects with hands-on migration expertise on public clouds such as AWS and Azure, preferably Snowflake.
- Knowledge of cybersecurity organization practices, operations, risk management processes, principles, architectural requirements, engineering, and threats and vulnerabilities, including incident response methodologies.
- Understanding of Authentication and Authorization Services and Identity & Access Management.
- Strong communication and interpersonal skills.
- Strong organization skills and the ability to work independently as well as with a team.

Preferred Qualifications:
- AWS Certified Solutions Architect Associate, AWS Certified DevOps Engineer Professional, and/or AWS Certified Solutions Architect Professional.
- Experience defining future-state roadmaps for data warehouse applications.
- Experience leading teams of developers within a project.
- Experience in the financial services (banking) industry.

Mandatory Skills:
- ETL / data warehouse concepts
- Snowflake
- CI/CD tools (Jenkins, GitHub)
- Python
- DataStage

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
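As a sketch of the Python-driven Snowflake loading this role touches, a minimal example using the snowflake-connector-python package; the account, warehouse, stage, and table names are placeholders, and real credentials would come from a secrets manager rather than being hard-coded.

import snowflake.connector

# Connection parameters are illustrative only.
conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="...",  # fetch from a secrets manager in practice
    warehouse="ETL_WH",
    database="EDW",
    schema="STAGING",
)

cur = conn.cursor()
try:
    # COPY INTO loads files from a named stage; stage and table names are hypothetical.
    cur.execute(
        "COPY INTO STAGING.ORDERS FROM @ETL_STAGE/orders/ "
        "FILE_FORMAT = (TYPE = PARQUET) MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE"
    )
finally:
    cur.close()
    conn.close()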
Posted 3 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Min Experience: 4.0 years
Max Experience: 8.0 years
Skills: Kubernetes, PySpark, Docker, GitLab, dbt, Python, Reliability, Angular 2, Grafana, AWS, Monitoring and Observability
Location: Pune

Job description:

Company Overview
Bridgenext is a global consulting company that provides technology-empowered business solutions for world-class organizations. Our global workforce of over 800 consultants provides best-in-class services to our clients to realize their digital transformation journey. Our clients span the emerging, mid-market, and enterprise space. With multiple offices worldwide, we are uniquely positioned to deliver digital solutions to our clients leveraging Microsoft, Java, and open source with a focus on Mobility, Cloud, Data Engineering, and Intelligent Automation. Emtec's singular mission is to create "Clients for Life" - long-term relationships that deliver rapid, meaningful, and lasting business value.

At Bridgenext, we have a unique blend of corporate and entrepreneurial cultures. This is where you would have an opportunity to drive business value for clients while you innovate and continue to grow and have fun while doing it. You would work with team members who are vibrant, smart, and passionate, and who bring their passion to all that they do - whether it's learning, giving back to our communities, or always going the extra mile for our clients.

Position Description
We are looking for members with hands-on data engineering experience who will work on internal and customer-based projects for Bridgenext. We are looking for someone who cares about the quality of code and who is passionate about providing the best solution to meet client needs and anticipate their future needs based on an understanding of the market. Someone who has worked on Hadoop projects, including processing and data representation using various AWS services.

Must Have Skills:
- 4-8 years of overall experience
- Strong programming experience with Python and the ability to write modular code following best practices, backed by unit tests with a high degree of coverage (see the test sketch after this posting)
- Knowledge of source control (Git/GitLab)
- Understanding of deployment patterns along with knowledge of CI/CD and build tools
- Knowledge of Kubernetes concepts and commands is a must
- Knowledge of monitoring and alerting tools like Grafana and OpenTelemetry is a must
- Knowledge of Astro/Airflow is a plus
- Knowledge of data governance is a plus
- Experience with cloud providers, preferably AWS
- Experience with PySpark, Snowflake, and dbt is good to have

Professional Skills:
- Solid written, verbal, and presentation communication skills
- Strong team and individual player
- Maintains composure during all types of situations and is collaborative by nature
- High standards of professionalism, consistently producing high-quality results
- Self-sufficient and independent, requiring very little supervision or intervention
- Demonstrates flexibility and openness to bring creative solutions to address issues
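To illustrate the "modular code backed by unit tests" requirement, a minimal pytest sketch for a PySpark transformation; the transformations module and its dedupe_latest function are hypothetical names invented for the example.

# test_transformations.py -- assumes a module `transformations` exposing `dedupe_latest`.
import pytest
from pyspark.sql import SparkSession

from transformations import dedupe_latest  # hypothetical module under test

@pytest.fixture(scope="session")
def spark():
    # A small local session is enough for unit tests.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_dedupe_latest_keeps_newest_record(spark):
    df = spark.createDataFrame(
        [("a", 1), ("a", 2), ("b", 1)], ["key", "version"]
    )
    result = dedupe_latest(df, key_col="key", order_col="version")
    assert result.count() == 2
    assert result.filter("key = 'a'").first()["version"] == 2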
Posted 3 days ago
4.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Data Engineer (AWS QuickSight, Glue, PySpark)
Location: Noida

Job Summary:
We are seeking a skilled Data Engineer with 4-5 years of experience to design, build, and maintain scalable data pipelines and analytics solutions within the AWS cloud environment. The ideal candidate will leverage AWS Glue, PySpark, and QuickSight to deliver robust data integration, transformation, and visualization capabilities. This role is critical in supporting business intelligence, analytics, and reporting needs across the organization.

Key Responsibilities:
- Design, develop, and maintain data pipelines using AWS Glue, PySpark, and related AWS services to extract, transform, and load (ETL) data from diverse sources.
- Build and optimize data warehouse/data lake infrastructure on AWS, ensuring efficient data storage, processing, and retrieval.
- Develop and manage ETL processes to source data from various systems, including databases, APIs, and file storage, and create unified data models for analytics and reporting.
- Implement and maintain business intelligence dashboards using Amazon QuickSight, enabling stakeholders to derive actionable insights.
- Collaborate with cross-functional teams (business analysts, data scientists, product managers) to understand requirements and deliver scalable data solutions.
- Ensure data quality, integrity, and security throughout the data lifecycle, implementing best practices for governance and compliance.
- Support self-service analytics by empowering internal users to access and analyze data through QuickSight and other reporting tools.
- Troubleshoot and resolve data pipeline issues, optimizing performance and reliability as needed.

Required Skills & Qualifications:
- Proficiency in AWS cloud services: AWS Glue, QuickSight, S3, Lambda, Athena, Redshift, EMR, and related technologies.
- Strong experience with PySpark for large-scale data processing and transformation.
- Expertise in SQL and data modeling for relational and non-relational databases.
- Experience building and optimizing ETL pipelines and data integration workflows.
- Familiarity with business intelligence and visualization tools, especially Amazon QuickSight.
- Knowledge of data governance, security, and compliance best practices.
- Strong programming skills in Python; experience with automation and scripting.
- Ability to work collaboratively in agile environments and manage multiple priorities effectively.
- Excellent problem-solving and communication skills.

Preferred Qualifications:
- AWS certification (e.g., AWS Certified Data Analytics - Specialty, AWS Certified Developer).
- Good to have: understanding of machine learning, deep learning, and generative AI concepts, including regression, classification, predictive modeling, and clustering.
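A minimal sketch of the kind of AWS Glue/PySpark job described above, using the standard Glue job boilerplate; the catalog database, table name, and S3 paths are placeholders.

import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Catalog database/table names are hypothetical placeholders.
dyf = glue_context.create_dynamic_frame.from_catalog(database="sales", table_name="raw_orders")

# Convert to a DataFrame, stamp the load date, and write partitioned Parquet.
df = dyf.toDF().withColumn("ingest_date", F.current_date())
df.write.mode("append").partitionBy("ingest_date").parquet("s3://example-bucket/curated/orders/")

job.commit()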
Posted 3 days ago
8.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview
The Data Analyst will be responsible for partnering closely with business and S&T teams in preparing final analysis reports for stakeholders, enabling them to make important decisions based on various facts and trends, and will lead data requirement, source analysis, data analysis, data transformation, and reconciliation activities. This role will interact with the DG, DPM, EA, DE, EDF, PO, and D&AI teams on historical data requirements and on sourcing the data for the Mosaic AI program to scale the solution to new markets.

Responsibilities
- Lead data requirement, source analysis, data analysis, data transformation, and reconciliation activities.
- Partner with the FP&A Product Owner and associated business SMEs to understand and document business requirements and associated needs.
- Perform the analysis of business data requirements and translate them into a data design that satisfies local, sector, and global requirements.
- Use automated tools to extract data from primary and secondary sources.
- Use statistical tools to identify, analyse, and interpret patterns and trends in complex data sets to support diagnosis and prediction.
- Work with engineers and business teams to identify process improvement opportunities and propose system modifications.
- Proactively identify impediments and look for pragmatic and constructive solutions to mitigate risk.
- Be a champion for continuous improvement and drive efficiency.

Preference will be given to candidates having a functional understanding of financial concepts (P&L, Balance Sheet, Cash Flow, Operating Expense) and experience modelling data and designing data flows.

Qualifications
- Bachelor of Technology from a reputed college.
- Minimum 8-10 years of relevant work experience in data modelling/analytics.
- Minimum 5-6 years of experience navigating data in Azure Databricks, Synapse, Teradata, or similar database technologies.
- Expertise in Azure (Databricks, Data Factory, Data Lake Store Gen2).
- Proficiency in SQL and PySpark to analyse data for both development validation and operational support is critical.
- Exposure to GenAI.
- Good communication and presentation skills are a must for this role.
Posted 3 days ago
4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.
- Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
- Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework with Python or Scala and big data technologies for various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop / Azure ecosystem components to implement scalable solutions to meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark, Kafka, and cloud computing services.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
- Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on Azure.
- Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB.
- Good to excellent SQL skills.

Preferred Technical And Professional Experience
- Certification in Azure and Databricks, or Cloudera Spark certified developers.
- Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB.
- Knowledge or experience of Snowflake will be an added advantage.
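To illustrate the streaming-pipeline experience mentioned above, a minimal Spark Structured Streaming sketch that reads from Kafka and lands a Delta table; the broker address, topic, and paths are placeholders.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Broker, topic, and sink paths are hypothetical.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    # Kafka delivers key/value as binary; cast for downstream processing.
    .select(F.col("key").cast("string"), F.col("value").cast("string"), "timestamp")
)

query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")
    .start("/mnt/delta/events")
)
query.awaitTermination()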
Posted 3 days ago
10.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description for Lead Data Engineer QA
Rank: Manager
Location: Bengaluru/Chennai/Kerala/Kolkata

Objectives and Purpose
The Lead Data Engineer QA will be responsible for testing business intelligence and data warehouse solutions, both on-premises and on cloud platforms. We are seeking an innovative and talented individual who can create test plans, protocols, and procedures for new software. In addition, you will be supporting the build of large-scale data architectures that provide information to downstream systems and business users.

Your Key Responsibilities
- Design and execute manual and automated test cases, including validating alignment with ELT data integrity and compliance.
- Support QA test case design, including identifying opportunities for test automation and developing scripts for automated processes as needed.
- Follow quality standards, conduct continuous monitoring and improvement, and manage test cases, test data, and defect processes using a risk-based approach as needed.
- Ensure all software releases meet regulatory standards, including requirements for validation, documentation, and traceability, with particular emphasis on data privacy and adherence to infrastructure security best practices.
- Proactively foster strong partnerships across teams and stakeholders to ensure alignment with quality requirements and address any challenges.
- Implement observability within testing processes to proactively identify, track, and resolve quality issues, contributing to sustained high-quality performance.
- Establish a methodology to test the effectiveness of BI and DWH projects, ELT reports, integration, and manual and automation functionality.
- Work closely with the product team to monitor data quality, integrity, and security throughout the product lifecycle, implementing data quality checks to ensure accuracy, completeness, and consistency.
- Lead the evaluation, implementation, and deployment of emerging tools and processes to improve productivity.
- Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations, using AWS-native technologies, to support continuing increases in data source, volume, and complexity.
- Define data requirements, gather and mine data, and validate the efficiency of data tools in the big data environment.
- Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
- Partner with Business Analytics and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
- Coordinate with Data Scientists to understand data requirements and design solutions that enable advanced analytics, machine learning, and predictive modelling.
- Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth.
- Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.

To qualify for the role, you must have the following:

Essential Skillsets
- Bachelor's degree in Engineering, Computer Science, Data Warehousing, or a related field.
- 10+ years of experience in software development, data science, data engineering, ETL, and analytics reporting development.
- Understanding of the project and test lifecycle, including exposure to CMMI and process improvement frameworks.
- Experience designing, building, implementing, and maintaining data and system integrations using dimensional data modelling, and developing and optimizing ETL pipelines.
- Proven track record of designing and implementing complex data solutions.
- Understanding of business intelligence concepts, ETL processing, dashboards, and analytics.
- Testing experience in Data Quality, ETL, OLAP, or Reports.
- Knowledge of data transformation projects, including database design concepts and white-box testing.
- Experience in cloud-based data solutions (AWS/Azure).
- Demonstrated understanding and experience using:
  - Cloud-based data solutions (AWS, IICS, Databricks)
  - GxP and regulatory and risk compliance
  - Cloud AWS infrastructure testing
  - Python data processing
  - SQL scripting
  - Test processes (e.g., ELT testing, SDLC)
  - Power BI/Tableau
  - Scripting (e.g., Perl and shell)
  - Data engineering programming languages (i.e., Python)
  - Distributed data technologies (e.g., PySpark)
  - Test management and defect management tools (e.g., HP ALM)
  - Cloud platform deployment and tools (e.g., Kubernetes)
  - DevOps and continuous integration
  - Databricks/ETL
- Understanding of database architecture and administration.
- Utilizes the principles of continuous integration and delivery to automate the deployment of code changes to elevated environments, fostering enhanced code quality, test coverage, and automation of resilient test cases.
- Possesses high proficiency in programming languages (e.g., SQL, Python, PySpark, AWS services) to design, maintain, and optimize data architecture and pipelines that fit business goals.
- Strong organizational skills with the ability to manage multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions.
- Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners.
- Strong problem-solving and troubleshooting skills.
- Ability to work in a fast-paced environment and adapt to changing business priorities.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
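As one concrete shape such ETL data-quality checks can take, a minimal PySpark reconciliation sketch comparing a source extract against a curated target; the paths, key column, and check set are assumptions for the example, not the posting's own framework.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-qa").getOrCreate()

# Source and target locations are illustrative.
source = spark.read.parquet("/mnt/raw/transactions/")
target = spark.read.format("delta").load("/mnt/curated/transactions/")

# Simple reconciliation checks: counts, totals, and key completeness.
checks = {
    "row_count_match": source.count() == target.count(),
    "amount_sum_match": (
        source.agg(F.sum("amount")).first()[0] == target.agg(F.sum("amount")).first()[0]
    ),
    "no_null_keys": target.filter(F.col("transaction_id").isNull()).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
assert not failed, f"QA checks failed: {failed}"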
Posted 3 days ago
12.0 years
0 Lacs
Kochi, Kerala, India
On-site
At EY, we're all in to shape your future with confidence. We'll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

Job Description
Title: Data Architect
Type of Employment: Permanent
Overall Years of Experience: 12-15 years
Relevant Years of Experience: 10+

The Data Architect is responsible for designing and implementing data architecture for multiple projects and for building strategies for data governance.

Position Summary
- 12-15 years of experience in a similar profile with a strong service delivery background.
- Experience as a Data Architect with a focus on Spark and Data Lake technologies.
- Experience in Azure Synapse Analytics.
- Proficiency in Apache Spark for large-scale data processing.
- Expertise in Databricks, Delta Lake, Azure Data Factory, and other cloud-based data services.
- Strong understanding of data modeling, ETL processes, and data warehousing principles.
- Ability to implement a data governance framework with Unity Catalog.
- Knowledge of designing scalable streaming data pipelines using Azure Event Hubs, Azure Stream Analytics, and Spark Streaming.
- Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro.
- Hands-on experience in Python and relevant libraries such as pyspark and numpy.
- Knowledge of Machine Learning pipelines, GenAI, and LLMs is a plus.
- Excellent analytical, problem-solving, and technical leadership skills.
- Experience in integration with business intelligence tools such as Power BI.
- Effective communication and collaboration abilities.
- Excellent interpersonal skills and a collaborative management style.
- Able to own and delegate responsibilities effectively.
- Ability to analyse and suggest solutions.
- Strong command of verbal and written English.

Essential Roles and Responsibilities
- Work as a Data Architect, able to design and implement data architecture for projects involving complex data such as big data and data lakes.
- Work with customers to define strategy for data architecture and data governance.
- Guide the team in implementing data engineering solutions.
- Proactively identify risks and communicate them to stakeholders; develop strategies to mitigate risks.
- Build best practices to enable faster service delivery.
- Build reusable components to reduce cost.
- Build scalable and cost-effective architecture.

EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
Posted 3 days ago
4.0 years
0 Lacs
Andhra Pradesh, India
On-site
Job Title: Data Engineer (4+ Years Experience)
Location: Pan India
Job Type: Full-Time
Experience: 4+ Years
Notice Period: Immediate to 30 days preferred

Job Summary
We are looking for a skilled and motivated Data Engineer with over 4 years of experience in building and maintaining scalable data pipelines. The ideal candidate will have strong expertise in AWS Redshift and Python/PySpark, with exposure to AWS Glue, Lambda, and ETL tools being a plus. You will play a key role in designing robust data solutions to support analytical and operational needs across the organization.

Key Responsibilities
- Design, develop, and optimize large-scale ETL/ELT data pipelines using PySpark or Python.
- Implement and manage data models and workflows in AWS Redshift.
- Work closely with analysts, data scientists, and stakeholders to understand data requirements and deliver reliable solutions.
- Perform data validation, cleansing, and transformation to ensure high data quality.
- Build and maintain automation scripts and jobs using Lambda and Glue (if applicable).
- Ingest, transform, and manage data from various sources into cloud-based data lakes (e.g., S3).
- Participate in data architecture and platform design discussions.
- Monitor pipeline performance, troubleshoot issues, and ensure data reliability.
- Document data workflows, processes, and infrastructure components.

Required Skills
- 4+ years of hands-on experience as a Data Engineer.
- Strong proficiency in AWS Redshift, including schema design, performance tuning, and SQL development.
- Expertise in Python and PySpark for data manipulation and pipeline development.
- Experience working with structured and semi-structured data (JSON, Parquet, etc.).
- Deep knowledge of data warehouse design principles, including star/snowflake schemas and dimensional modeling.

Good To Have
- Working knowledge of AWS Glue and building serverless ETL pipelines.
- Experience with AWS Lambda for lightweight processing and orchestration.
- Exposure to ETL tools like Informatica, Talend, or Apache NiFi.
- Familiarity with workflow orchestrators (e.g., Airflow, Step Functions).
- Knowledge of DevOps practices, version control (Git), and CI/CD pipelines.

Preferred Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- AWS certifications (e.g., AWS Certified Data Analytics, Developer Associate) are a plus.
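A minimal PySpark sketch of the S3-based ingestion described above, staging cleaned data as partitioned Parquet for a downstream Redshift COPY or Spectrum external table; the bucket names and columns are placeholders.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3-to-redshift-staging").getOrCreate()

# Paths and column names are hypothetical.
raw = spark.read.json("s3://example-bucket/landing/clicks/")

# Deduplicate on the event key and derive a partition column.
clean = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
)

# Stage as Parquet; a Redshift COPY (or Spectrum table) picks it up from here.
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/staging/clicks/"
)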
Posted 3 days ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
- Design and develop robust ETL pipelines using Python, PySpark, and GCP services.
- Build and optimize data models and queries in BigQuery for analytics and reporting.
- Ingest, transform, and load structured and semi-structured data from various sources.
- Collaborate with data analysts, scientists, and business teams to understand data requirements.
- Ensure data quality, integrity, and security across cloud-based data platforms.
- Monitor and troubleshoot data workflows and performance issues.
- Automate data validation and transformation processes using scripting and orchestration tools.

Required Skills & Qualifications
- Hands-on experience with Google Cloud Platform (GCP), especially BigQuery.
- Strong programming skills in Python and/or PySpark.
- Experience in designing and implementing ETL workflows and data pipelines.
- Proficiency in SQL and data modeling for analytics.
- Familiarity with GCP services such as Cloud Storage, Dataflow, Pub/Sub, and Composer.
- Understanding of data governance, security, and compliance in cloud environments.
- Experience with version control (Git) and agile development practices.
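To make the BigQuery/PySpark combination concrete, a minimal sketch using the spark-bigquery connector (bundled by default on Dataproc); the project, dataset, table, and bucket names are placeholders.

from pyspark.sql import SparkSession, functions as F

# Assumes the spark-bigquery connector is on the classpath.
spark = SparkSession.builder.appName("gcp-etl").getOrCreate()

# Read a BigQuery table; names are hypothetical.
orders = spark.read.format("bigquery").option("table", "my-project.sales.orders").load()

daily = orders.groupBy("order_date").agg(F.sum("amount").alias("revenue"))

# Indirect writes stage through a temporary GCS bucket.
(daily.write.format("bigquery")
 .option("table", "my-project.reporting.daily_revenue")
 .option("temporaryGcsBucket", "my-temp-bucket")
 .mode("overwrite")
 .save())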
Posted 3 days ago
12.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, we're all in to shape your future with confidence. We'll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

Job Description
Title: Data Architect
Type of Employment: Permanent
Overall Years of Experience: 12-15 years
Relevant Years of Experience: 10+

The Data Architect is responsible for designing and implementing data architecture for multiple projects and for building strategies for data governance.

Position Summary
- 12-15 years of experience in a similar profile with a strong service delivery background.
- Experience as a Data Architect with a focus on Spark and Data Lake technologies.
- Experience in Azure Synapse Analytics.
- Proficiency in Apache Spark for large-scale data processing.
- Expertise in Databricks, Delta Lake, Azure Data Factory, and other cloud-based data services.
- Strong understanding of data modeling, ETL processes, and data warehousing principles.
- Ability to implement a data governance framework with Unity Catalog.
- Knowledge of designing scalable streaming data pipelines using Azure Event Hubs, Azure Stream Analytics, and Spark Streaming.
- Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro.
- Hands-on experience in Python and relevant libraries such as pyspark and numpy.
- Knowledge of Machine Learning pipelines, GenAI, and LLMs is a plus.
- Excellent analytical, problem-solving, and technical leadership skills.
- Experience in integration with business intelligence tools such as Power BI.
- Effective communication and collaboration abilities.
- Excellent interpersonal skills and a collaborative management style.
- Able to own and delegate responsibilities effectively.
- Ability to analyse and suggest solutions.
- Strong command of verbal and written English.

Essential Roles and Responsibilities
- Work as a Data Architect, able to design and implement data architecture for projects involving complex data such as big data and data lakes.
- Work with customers to define strategy for data architecture and data governance.
- Guide the team in implementing data engineering solutions.
- Proactively identify risks and communicate them to stakeholders; develop strategies to mitigate risks.
- Build best practices to enable faster service delivery.
- Build reusable components to reduce cost.
- Build scalable and cost-effective architecture.

EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
Posted 3 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
What You'll Do
The EIIC functional excellence organization is aligned with the CTO's strategy to drive "One Eaton Engineering Functional Excellence". The charter of this organization is to simplify and create better work experiences for our engineers by transforming existing engineering work processes. The EIIC functional excellence organization will work with global Engineering Functional Excellence leaders in the CTO's office and in the Electrical and Industrial Sector businesses. These organizations will be responsible for developing and deploying One Eaton processes across all sectors and businesses across the globe. As a Senior Data Analyst and Automation Engineer, you will be responsible for understanding critical problem statements and finding unique end-to-end solutions using big data analytics and automation expertise. You will also be responsible for establishing and deploying standard practices and processes for process automation, big data analytics, dashboards, and reporting, and for driving continuous improvement on these processes.

Primary Responsibilities:
- Work with various internal and external customers; gather and prioritize customer needs and translate them into actionable requirements.
- Communicate insights to stakeholders, enabling data-driven decision-making across the organization.
- Develop apps in Workshop, perform ETL processes in Palantir, and develop meaningful insights from the data (see the transform sketch after this posting).
- Select the appropriate programming languages, tools, and frameworks, considering factors like scalability, performance, and security.
- Establish coding standards and best practices to ensure the code is maintainable and efficient.
- Organize and assemble information from diverse data sources in such a manner that the data aggregation is easily replicable and maintainable.
- Proficiently identify and apply the appropriate data analytics algorithm and come up with recommendations based on the insights generated.
- Report out results in the form of dashboards reporting measurement against targets, historical data trends, and data snapshots supporting the end customers' data requirements.
- Strategize new uses for data and its interaction with data design.
- Manage multiple projects and deliver results on time and with the requisite quality.
- Strive to be recognized internally and externally in this area by continuously learning and developing project management standard works and dashboard reporting.
- Knowledge of engineering and program management data sets, including SAP or Oracle datasets, is recommended. Knowledge of SCM would be an added advantage.

Qualifications
Required: Bachelor's degree in Computer/Electrical Engineering with 2-5 years of experience. Strong understanding of organizational processes.

Skills
- Professional experience in database management, data solution development, data transformation, and data quality assurance.
- Proficiency in using Palantir tools, including Code Repository, Ontology Manager, Object View, Workshop (dashboards, action forms), and Data Connection.
- Knowledge of Power BI, ETL processes, RLS, and Dataflow would be an added advantage.
- Strong hands-on experience with Python and PySpark, demonstrating the ability to write, debug, and optimize code for data analysis and transformation.
- Competence in analyzing data and efficiently troubleshooting issues using PySpark and SQL.
- Familiarity with data ingestion, including data loading expertise with Oracle databases, SharePoint, and API calls.
- Comfortable working in Agile development methodologies, adapting to changing project requirements and priorities.
- Effective verbal and written communication skills to collaborate with team members and stakeholders.
- Capability to adhere to development best practices, including maintaining code standards, unit testing, integration testing, and quality assurance processes.

Primary Skills: Palantir tools, Python/PySpark, database management
Secondary Skills: Excellent verbal and written communication and interpersonal skills; ability to work independently and within a team environment

- Process Management: Good at figuring out the processes necessary to get things done; knows how to organize people and activities; knows what to measure and how to measure it; can simplify complex processes; gets more out of fewer resources.
- Problem Solving: Uses rigorous logic and methods to solve difficult problems with effective solutions; probes all fruitful sources for answers; can see hidden problems; is excellent at honest analysis; looks beyond the obvious and doesn't stop at the first answers.
- Decision Quality: Makes good decisions based upon a mixture of analysis, wisdom, experience, and judgment.
- Critical Thinking: The ability to analyze a situation and make a decision based on the information you have. As an automation engineer, you may be required to make decisions about how best to implement automation processes, and strong critical thinking skills can help you make the best decision for your company.
- Communication: An essential skill for automation engineers, as they often work with other engineers and professionals in other departments. Effective communication can help you collaborate with others, share ideas, and explain technical concepts.
- Drive for Results: Can be counted on to exceed goals successfully.
- Interpersonal Savvy: Relates well to all kinds of people; builds appropriate rapport.
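For illustration of the Palantir Foundry Code Repository work described above, a minimal Python transform sketch using Foundry's transforms API; the dataset paths and column names are hypothetical.

# A Foundry Python transform in a Code Repository; dataset paths are illustrative.
from transforms.api import transform_df, Input, Output
from pyspark.sql import functions as F

@transform_df(
    Output("/Company/clean/purchase_orders"),
    source=Input("/Company/raw/purchase_orders"),
)
def compute(source):
    # Deduplicate on the business key and stamp the load date.
    return (
        source.dropDuplicates(["po_number"])
              .withColumn("load_date", F.current_date())
    )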
Posted 3 days ago
3.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
Job Description
Join a dynamic and diverse global team dedicated to developing innovative solutions that uncover the complete consumer journey for our clients. We are seeking a highly skilled Data Scientist with strong development skills in programming languages such as Python, expertise in statistics, mathematics, and econometrics, and experience with panel data, to revolutionize the way we measure consumer behavior both online and in-store. Looking ahead, we are excited to find someone who will join our team in developing a tool that can simulate the impact of production process changes on client data. This tool, outside of the production factory, will allow the wider Data Science team to drive innovation with unprecedented efficiency.

About The Role
- Collaborative Environment: Work with an international team in a flexible and supportive setting, fostering cross-functional collaboration between data scientists, engineers, and product stakeholders.
- Tool Ownership and Development: Take ownership of a core Python-based tool, ensuring its continued development, scalability, and maintainability. Use robust engineering practices such as version control, testing, and PRs.
- Innovative Solution Development: Collaborate closely with subject matter experts to understand complex methodologies. Translate these into scalable, production-ready implementations within the Python tool. Design and implement new features and enhancements to the tool to address evolving market challenges and improve team efficiency.
- Methodology Enhancement: Evaluate and improve current methodologies, including data cleaning, preparation, quality tracking, and consumer projection, with a strong focus on automation and reproducibility.
- Documentation & Code Quality: Maintain comprehensive documentation of the tool's architecture, usage, and development roadmap. Ensure high code quality through peer reviews and adherence to best practices.
- Research and Analysis: Conduct rigorous research and analysis to inform tool improvements and ensure alignment with business needs. Communicate findings and recommendations clearly to both technical and non-technical audiences.
- Deployment and Support: Support the production deployment of new features and enhancements. Monitor tool performance and address issues proactively to ensure reliability and user satisfaction.
- Cross-Team Coordination: Coordinate efforts across multiple teams and stakeholders to ensure seamless integration of the tool into broader workflows and systems.

Qualifications
About You
Ideally you possess a good understanding of consumer behavior, panel-based projections, and consumer metrics and analytics. You have successfully designed and developed software applying statistical and data analytical methods and demonstrated your ability to handle complex data sets. Experience with (un)managed crowdsourced panels and receipt capture methodologies is an advantage.

- Educational Background: Bachelor's or Master's degree in Computer Science, Software Engineering, Mathematics, Statistics, Socioeconomics, Data Science, or a related field, with a minimum of 3 years of relevant experience.
- Programming Proficiency: Proficient with Python or another programming language (R, C++, or Java), with a willingness to learn Python.
- Software Engineering Skills: Strong software engineering skills, including experience designing and developing software; optionally, experience with version control systems such as GitHub or Bitbucket.
- Data Analysis Skills: Proficiency in manipulating, analyzing, and interpreting large data sets.
- Data Handling: Experience using Spark, specifically the PySpark package, and experience working with large-scale datasets. Optionally, experience in SQL and working with queries.
- Continuous Learning: Eagerness to adopt and develop evolving technologies and tools.
- Statistical Expertise: Statistical and logical skills, experience in data cleaning, and data aggregation techniques.
- Communication and Collaboration: Strong communication, writing, and collaboration skills.

Nice to Have
- Consumer Insights: Knowledge of consumer behavior and (un)managed consumer-related crowdsourced panels.
- Technology Skills: Familiarity with technology stacks for cloud computing (Azure AI, Databricks, Snowflake).
- Production Support: Experience or interest in supporting technology teams in production deployment.

Additional Information
Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our Commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
Posted 3 days ago
0 years
0 Lacs
India
On-site
Job Description:
We are seeking a highly skilled Azure Data Engineer with 4+ years of experience to design, develop, and optimize data pipelines and data integration solutions in a cloud-based environment. The ideal candidate will have strong technical expertise in Azure, data engineering tools, and advanced ETL design, along with excellent communication and problem-solving skills.

Key Responsibilities:
- Design and develop advanced ETL pipelines for data ingestion and egress for batch data.
- Build scalable data solutions using Azure Data Factory (ADF), Databricks, Spark (PySpark & Scala Spark), and other Azure services.
- Troubleshoot data jobs, identify issues, and implement effective root-cause solutions.
- Collaborate with stakeholders to gather requirements and propose efficient solution designs.
- Ensure data quality, reliability, and adherence to best practices in data engineering.
- Maintain detailed documentation of problem definitions, solutions, and architecture.
- Work independently with minimal supervision while ensuring project deadlines are met.

Required Skills & Qualifications:
- Microsoft Certified: Azure Fundamentals (preferred).
- Microsoft Certified: Azure Data Engineer Associate (preferred).
- Proficiency in SQL, Python, and Scala.
- Strong knowledge of Azure cloud services, ADF, and Databricks.
- Hands-on experience with Apache Spark (PySpark & Scala Spark).
- Expertise in designing and implementing complex ETL pipelines for batch data.
- Strong troubleshooting skills with the ability to perform root cause analysis.

Soft Skills:
- Excellent verbal and written communication skills.
- Strong documentation skills for drafting problem definitions and solutions.
- Ability to effectively gather requirements and propose solution designs.
- Self-driven with the ability to work independently with minimal supervision.
Posted 3 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary:
We are seeking a skilled and experienced Azure Databricks Engineer to join our growing data engineering team. The ideal candidate will have deep hands-on expertise in building scalable data pipelines and streaming architectures using Azure-native technologies. Prior experience in the banking or financial services domain is highly desirable, as you will be working with critical data assets and supporting regulatory, risk, and operational reporting use cases.

Key Responsibilities:
- Design, develop, and optimize data pipelines using Databricks (PySpark) for batch and real-time data processing.
- Implement CDC (Change Data Capture) and Delta Live Tables/Autoloader to support near-real-time ingestion.
- Integrate various structured and semi-structured data sources using ADF, ADLS, and Kafka (Confluent).
- Develop CI/CD pipelines for data engineering workflows using GitHub Actions or Azure DevOps.
- Write efficient and reusable SQL and Python code for data transformations and validations.
- Ensure data quality, lineage, governance, and security across all ingestion and transformation layers.
- Collaborate closely with business analysts, data scientists, and data stewards to support use cases in risk, finance, compliance, and operations.
- Participate in code reviews, architectural discussions, and documentation efforts.

Required Skills & Qualifications:
- Strong proficiency in SQL, Python, and PySpark.
- Proven experience with Azure Databricks, including notebooks, jobs, clusters, and Delta Lake.
- Experience with Azure Data Lake Storage (ADLS Gen2) and Azure Data Factory (ADF).
- Hands-on experience with Confluent Kafka for streaming data integration.
- Strong understanding of Autoloader, CDC mechanisms, and Delta Lake-based architecture.
- Experience implementing CI/CD pipelines using GitHub and/or Azure DevOps.
- Knowledge of data modeling, data warehousing, and data security best practices.
- Exposure to regulatory and risk data use cases in the banking/financial sector is a strong plus.

Preferred Qualifications:
- Azure certifications (e.g., Azure Data Engineer Associate).
- Experience with tools such as Delta Live Tables, Unity Catalog, and Lakehouse architecture.
- Familiarity with business glossaries, data lineage tools, and data governance frameworks.
- Understanding of financial data, including GL, loan, customer, transaction, or market risk domains.
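A minimal sketch of the Autoloader ingestion pattern named above, reading JSON from ADLS into a bronze Delta table on Databricks; the storage account, container, and paths are placeholders.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # supplied by the Databricks runtime

# Landing, schema-tracking, and checkpoint paths are hypothetical.
stream = (
    spark.readStream
    .format("cloudFiles")                  # Databricks Auto Loader
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/schemas/trades")
    .load("abfss://landing@account.dfs.core.windows.net/trades/")
)

# availableNow processes the backlog incrementally, then stops.
(stream.writeStream
 .format("delta")
 .option("checkpointLocation", "/mnt/checkpoints/trades")
 .trigger(availableNow=True)
 .start("/mnt/delta/bronze/trades"))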
Posted 3 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Data Engineer - Senior
Location: Noida
Employment Type: Permanent
Experience Required: Minimum 5 years
Primary Skills: Cloud - AWS (AWS Lambda, AWS EventBridge, AWS Fargate)

---

Job Description
We are seeking a highly skilled Senior Data Engineer to design, implement, and maintain scalable data pipelines that support machine learning model training and inference.

Responsibilities:
- Build and maintain large-scale data pipelines ensuring scalability, reliability, and efficiency.
- Collaborate with data scientists to streamline the deployment and management of machine learning models.
- Design and optimize ETL (Extract, Transform, Load) processes and integrate data from multiple sources into structured storage systems.
- Automate ML workflows using MLOps tools and frameworks (e.g., Kubeflow, MLflow, TensorFlow Extended - TFX).
- Monitor model performance, data lineage, and system health in production environments.
- Work cross-functionally to improve data architecture and enable seamless ML model integration.
- Manage and optimize cloud platforms and data storage solutions (AWS, GCP, Azure).
- Ensure data security, integrity, and compliance with governance policies.
- Troubleshoot and optimize pipelines to improve reliability and performance.

---

Required Skills
- Languages: Python, SQL, PySpark
- Cloud: AWS services (Lambda, EventBridge, Fargate); cloud platforms (AWS, GCP, Azure)
- DevOps: Docker, Kubernetes, containerization
- ETL Tools: AWS Glue, SQL Server (SSIS, SQL packages)
- Nice to Have: Redshift, SAS dataset knowledge

---

Mandatory Competencies
- DevOps/Configuration Management: Docker
- DevOps/Configuration Management: Cloud Platforms - AWS
- DevOps/Configuration Management: Containerization (Docker, Kubernetes)
- ETL: AWS Glue
- Database: SQL Server - SQL Packages
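To illustrate the Lambda/EventBridge orchestration listed in the primary skills, a minimal handler sketch that starts a Glue job via boto3 when an EventBridge schedule fires; the job name and argument contract are hypothetical.

import boto3

glue = boto3.client("glue")

def handler(event, context):
    """Invoked by an EventBridge schedule; job name and arguments are placeholders."""
    response = glue.start_job_run(
        JobName="nightly-feature-pipeline",
        # EventBridge scheduled events carry an ISO timestamp in "time".
        Arguments={"--run_date": event.get("time", "")},
    )
    return {"job_run_id": response["JobRunId"]}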
Posted 3 days ago
3.0 - 6.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service Advisory Industry/Sector FS X-Sector Specialism Operations Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Responsibilities Job Description & Summary – Senior Associate – Azure Data Engineer Role : Senior Associate Exp : 3 - 6 Years Location: Kolkata Technical Skills: Strong expertise in Azure Databricks, Azure Data Factory (ADF), PySpark, SQL Server, and Python. Solid understanding of Azure Functions and their application in data processing workflows. Understanding of DevOps practices and CI/CD pipelines for data solutions. Experience with other ETL tools such as Informatica Intelligent Cloud Services (IICS) is a plus. Strong problem-solving skills and ability to work independently and collaboratively in a fast-paced environment. Excellent communication skills to effectively convey technical concepts to non-technical stakeholders. Key Responsibilities: Develop, maintain, and optimize scalable data pipelines using Azure Databricks, Azure Data Factory (ADF), and PySpark. Collaborate with data architects and business stakeholders to translate requirements into technical solutions. Implement and manage data integration processes using SQL Server and Python. Design and deploy Azure Functions to support data processing workflows. Monitor and troubleshoot data pipeline performance and reliability issues. Ensure data quality, security, and compliance with industry standards and best practices. Document technical specifications and maintain clear and concise project documentation. Mandatory Skill Sets Azure Databricks, Azure Data Factory (ADF), and PySpark. Preferred skill sets: Azure Databricks, Azure Data Factory (ADF), and PySpark. 
Years of Experience Required: 3-6 years
Education Qualification: B.E. (B.Tech)/M.E./M.Tech
Degrees/Field of Study Required: Bachelor of Engineering, Master of Engineering
Required Skills: ETL Tools, Microsoft Azure, PySpark
Optional Skills: Python (Programming Language)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Posted 3 days ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities
- 3+ years of experience implementing analytical solutions using Palantir Foundry, preferably in PySpark and on hyperscaler platforms (cloud services such as AWS, GCP, and Azure), with a focus on building data transformation pipelines at scale.
- Team management: must have experience mentoring and managing large teams (20 to 30 people) for complex engineering programs, and in hiring and nurturing Palantir Foundry talent.
- Training: should have experience creating training programs in Foundry and delivering them in a hands-on format, either offline or virtually.
- At least 3 years of hands-on experience building and managing Ontologies on Palantir Foundry.
- At least 3 years of experience with Foundry services: data engineering with Contour and Fusion; dashboarding and report development using Quiver (or Reports); application development using Workshop.
- Exposure to Map and Vertex is a plus; Palantir AIP experience is a plus.
- Hands-on experience in data engineering and building data pipelines (code/no-code) for ELT/ETL data migration, data refinement, and data quality checks on Palantir Foundry.
- Hands-on experience managing the data life cycle on at least one hyperscaler platform (AWS, GCP, Azure) using managed services or containerized deployments for data pipelines is necessary.
- Hands-on experience working with and building on the Ontology (especially demonstrable experience building semantic relationships).
- Proficiency in SQL, Python, and PySpark, with a demonstrable ability to write and optimize SQL and Spark jobs. Some experience with Apache Kafka and Airflow is a prerequisite as well.
- Hands-on DevOps experience on hyperscaler platforms and Palantir Foundry is necessary; experience in MLOps is a plus.
- Experience developing and managing scalable architecture and working with large data sets.
- Open-source contributions (or own repositories highlighting work) on GitHub or Kaggle are a plus.
- Experience with graph data and graph analysis libraries (such as Spark GraphX, Python NetworkX) is a plus.
- A Palantir Foundry certification (Solution Architect, Data Engineer) is a plus; the certificate should be valid at the time of interview.
- Experience developing GenAI applications is a plus.

Mandatory Skill Sets: At least 3 years of hands-on experience building and managing Ontologies on Palantir Foundry, and at least 3 years of experience with Foundry services.
Preferred Skill Sets: Palantir Foundry
Years of Experience Required: 4 to 7 years (3+ years relevant)
Education Qualification: Bachelor's degree in computer science, data science, or any other engineering discipline; a master's degree is a plus.
Degrees/Field of Study Required: Bachelor of Science
Required Skills: Palantir (Software)
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more}
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
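As a rough illustration of code-based pipeline development on Foundry, here is a minimal sketch using the Foundry transforms API: a single transform that performs the kind of data refinement and quality checks described above. The dataset paths and column names are hypothetical assumptions.

    # Minimal sketch of a Foundry transform (hypothetical dataset paths and columns).
    from transforms.api import transform_df, Input, Output
    from pyspark.sql import functions as F

    @transform_df(
        Output("/Example/datasets/clean_customers"),   # hypothetical output dataset
        raw=Input("/Example/datasets/raw_customers"),  # hypothetical input dataset
    )
    def clean_customers(raw):
        # A simple refinement and data quality step: drop records missing the key,
        # normalise a column, and de-duplicate.
        return (
            raw.filter(F.col("customer_id").isNotNull())
               .withColumn("email", F.lower(F.col("email")))
               .dropDuplicates(["customer_id"])
        )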
Posted 3 days ago
6.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC will focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences.

Responsibilities
We are seeking a highly skilled and experienced Python developer with 6-7 years of hands-on experience in software development.

Key Responsibilities:
- Design, develop, test, and maintain robust and scalable backend applications using FastAPI to deliver high-performance APIs.
- Write reusable, efficient code following best practices.
- Collaborate with cross-functional teams and integrate user-facing elements with server-side logic.
- Architect and implement distributed, scalable microservices leveraging Temporal workflows for orchestrating complex processes.
- Participate in code reviews and mentor junior developers.
- Debug and resolve technical issues and production incidents.
- Follow agile methodologies and contribute to sprint planning and estimations.
- Strong communication and collaboration skills; relevant certifications are a plus.

Required Skills:
- Strong proficiency in Python 3.x.
- Collaborate closely with DevOps to implement CI/CD pipelines for Python projects, ensuring smooth deployment to production environments; integrate with various databases (e.g., Cosmos DB) and message queues (e.g., Kafka, Azure Event Hubs) for seamless backend operations.
- Experience in one or more Python frameworks (Django, Flask, FastAPI).
- Develop and maintain unit and integration tests using frameworks like pytest and unittest to ensure code quality and reliability.
- Experience with Docker, Kubernetes, and cloud environments (AWS, GCP, or Azure) for deploying and managing Python services.
- Familiarity with asynchronous programming (e.g., asyncio, aiohttp) and event-driven architectures.
- Strong skills in PySpark for large-scale data processing.
- Solid understanding of object-oriented programming and design principles.
- Proficient in using version control systems like Git.

Mandatory Skill Sets: Python Developer
Preferred Skill Sets: Experience with Docker, Kubernetes, and cloud environments (AWS, GCP, or Azure) for deploying and managing Python services.
Years of Experience Required: 4-7 years
Education Qualification: B.Tech/B.E./MCA
Degrees/Field of Study Required: Bachelor of Technology, Bachelor of Engineering
Required Skills: Python (Programming Language)
Optional Skills: Acceptance Test Driven Development (ATDD), Accepting Feedback, Active Listening, Analytical Thinking, Android, API Management, Appian (Platform), Application Development, Application Frameworks, Application Lifecycle Management, Application Software, Business Process Improvement, Business Process Management (BPM), Business Requirements Analysis, C#.NET, C++ Programming Language, Client Management, Code Review, Coding Standards, Communication, Computer Engineering, Computer Science, Continuous Integration/Continuous Delivery (CI/CD), Creativity {+ 46 more}
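For illustration, a minimal FastAPI sketch with an async endpoint, in the spirit of the backend work described above. The model, route, and module name are illustrative assumptions.

    # Minimal FastAPI sketch (hypothetical model and route).
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Order(BaseModel):
        order_id: int
        amount: float

    @app.post("/orders")
    async def create_order(order: Order) -> dict:
        # In a real service this would await a database or message-queue call
        # (e.g., Cosmos DB or Kafka); here we simply echo the validated payload.
        return {"status": "accepted", "order_id": order.order_id}

    # Run locally with: uvicorn main:app --reload  (assuming this file is main.py)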
Posted 3 days ago
0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Associate

Job Description & Summary
We are seeking a Data Engineer to design, develop, and maintain data ingestion processes for a data platform built on Microsoft technologies, ensuring data quality and integrity. The role involves collaborating with data architects and business analysts to implement solutions using tools like ADF and Azure Databricks, and requires strong SQL skills.

Responsibilities
- Develop, test, and optimize ETL workflows and maintain documentation; ETL development experience in the Microsoft data track is required.
- Work with the business team to translate business requirements into technical requirements.
- Demonstrated expertise in Agile methodologies, including Scrum, Kanban, or SAFe.

Mandatory Skill Sets
- Strong proficiency in Azure Databricks, including Spark and Delta Lake.
- Experience with Azure Data Factory, Azure Data Lake Storage, and Azure SQL Database.
- Proficiency in data integration, ETL processes, and T-SQL.
- Experience working in Python for data engineering.
- Experience working with Postgres and graph databases.
- Experience in architecture design and data modelling.

Good-to-Have Skill Sets
- Unity Catalog / Purview
- Familiarity with Fabric/Snowflake service offerings
- Visualization tool: Power BI

Preferred Skill Sets
- Hands-on knowledge of Python and PySpark, with strong SQL knowledge; ETL and data warehousing experience is a must.
Certifications: Any one of Databricks Certified Data Engineer Associate, Microsoft Certified: Azure Data Engineer Associate, or Azure Solutions Architect is mandatory.
Years of Experience Required: 5+ years
Education Qualification: Bachelor's degree in Computer Science, IT, or a related field.
Degrees/Field of Study Required: Bachelor of Engineering
Required Skills: Data Engineering
Optional Skills: Acceptance Test Driven Development (ATDD), Accepting Feedback, Active Listening, Android, API Management, Appian (Platform), Application Development, Application Frameworks, Application Lifecycle Management, Application Software, Business Process Improvement, Business Process Management (BPM), Business Requirements Analysis, C#.NET, C++ Programming Language, Client Management, Code Review, Coding Standards, Communication, Computer Engineering, Computer Science, Continuous Integration/Continuous Delivery (CI/CD), Debugging, Emotional Regulation {+ 41 more}
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
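By way of illustration, a minimal sketch of an ingestion step on this stack: an extract from Azure SQL Database over JDBC followed by a Delta Lake merge (upsert) into a curated table. Connection details, paths, and table names are assumptions, not part of the posting.

    # Minimal sketch: Azure SQL -> Delta Lake upsert (hypothetical names and paths).
    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("ingest-demo").getOrCreate()

    # Extract from a hypothetical Azure SQL Database source over JDBC.
    src = (
        spark.read.format("jdbc")
        .option("url", "jdbc:sqlserver://example.database.windows.net;database=sales")
        .option("dbtable", "dbo.customers")
        .option("user", "etl_user").option("password", "<secret>")
        .load()
    )

    # Merge (upsert) into the curated Delta table so reruns stay idempotent.
    target = DeltaTable.forPath(spark, "abfss://curated@examplelake.dfs.core.windows.net/customers/")
    (
        target.alias("t")
        .merge(src.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )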
Posted 3 days ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Line of Service: Advisory
Industry/Sector: FS X-Sector
Specialism: Data, Analytics & AI
Management Level: Associate

Responsibilities
Role: Senior Associate
Experience: 3-6 years
Location: Kolkata

Technical Skills:
- Strong expertise in Azure Databricks, Azure Data Factory (ADF), PySpark, SQL Server, and Python.
- Solid understanding of Azure Functions and their application in data processing workflows.
- Understanding of DevOps practices and CI/CD pipelines for data solutions.
- Experience with other ETL tools such as Informatica Intelligent Cloud Services (IICS) is a plus.
- Strong problem-solving skills and the ability to work independently and collaboratively in a fast-paced environment.
- Excellent communication skills to effectively convey technical concepts to non-technical stakeholders.

Key Responsibilities:
- Develop, maintain, and optimize scalable data pipelines using Azure Databricks, Azure Data Factory (ADF), and PySpark.
- Collaborate with data architects and business stakeholders to translate requirements into technical solutions.
- Implement and manage data integration processes using SQL Server and Python.
- Design and deploy Azure Functions to support data processing workflows.
- Monitor and troubleshoot data pipeline performance and reliability issues.
- Ensure data quality, security, and compliance with industry standards and best practices.
- Document technical specifications and maintain clear and concise project documentation.

Mandatory Skill Sets: Azure Databricks, Azure Data Factory (ADF), and PySpark.
Preferred Skill Sets: Azure Databricks, Azure Data Factory (ADF), and PySpark.
Years of Experience Required: 3-6 years
Education Qualification: B.E. (B.Tech)/M.E./M.Tech
Degrees/Field of Study Required: Bachelor of Engineering, Master of Engineering
Required Skills: Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis, Intellectual Curiosity, Java (Programming Language), Market Development {+ 11 more}
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Posted 3 days ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Job Summary
We are seeking a talented Data Engineer with strong expertise in Databricks, specifically in Unity Catalog, PySpark, and SQL, to join our data team. You'll play a key role in building secure, scalable data pipelines and implementing robust data governance strategies using Unity Catalog.

Key Responsibilities:
- Design and implement ETL/ELT pipelines using Databricks and PySpark.
- Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets.
- Develop high-performance SQL queries and optimize Spark jobs.
- Collaborate with data scientists, analysts, and business stakeholders to understand data needs.
- Ensure data quality and compliance across all stages of the data lifecycle.
- Implement best practices for data security and lineage within the Databricks ecosystem.
- Participate in CI/CD, version control, and testing practices for data pipelines.

Required Skills:
- Proven experience with Databricks and Unity Catalog (data permissions, lineage, audits).
- Strong hands-on skills with PySpark and Spark SQL.
- Solid experience writing and optimizing complex SQL queries.
- Familiarity with Delta Lake, data lakehouse architecture, and data partitioning.
- Experience with cloud platforms like Azure or AWS.
- Understanding of data governance, RBAC, and data security standards.

Preferred Qualifications:
- Databricks Certified Data Engineer Associate or Professional.
- Experience with tools like Airflow, Git, Azure Data Factory, or dbt.
- Exposure to streaming data and real-time processing.
- Knowledge of DevOps practices for data engineering.
Mandatory Skill Sets: Databricks
Preferred Skill Sets: Databricks
Years of Experience Required: 7-14 years
Education Qualification: BE/BTECH, ME/MTECH, MBA, MCA
Degrees/Field of Study Required: Master of Business Administration, Bachelor of Engineering, Bachelor of Technology, Master of Engineering
Required Skills: Databricks Platform
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 33 more}
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date: August 11, 2025
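As a brief illustration of the Unity Catalog governance work named above, a minimal sketch as it might run in a Databricks notebook (where `spark` is predefined). The catalog, schema, table, and group names are hypothetical.

    # Minimal sketch of Unity Catalog governance from a Databricks notebook
    # (hypothetical catalog/schema/table/group names).
    spark.sql("CREATE CATALOG IF NOT EXISTS finance")
    spark.sql("CREATE SCHEMA IF NOT EXISTS finance.curated")

    # Three-level namespace (catalog.schema.table) for a governed Delta table.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS finance.curated.daily_totals (
            order_date DATE,
            total_amount DOUBLE
        )
    """)

    # Fine-grained access control: grant read access to an account-level group.
    spark.sql("GRANT SELECT ON TABLE finance.curated.daily_totals TO `data-analysts`")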
Posted 3 days ago