Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Position Summary: This role is accountable for running the day-to-day operations of the Data Platform in Azure / AWS Databricks. The role involves designing and implementing data ingestion pipelines from multiple sources using Azure Databricks, ensuring seamless and efficient pipeline execution, and adhering to security, regulatory, and audit control guidelines.
Key Responsibilities:
● Design and implement data ingestion pipelines from multiple sources using Azure Databricks.
● Ensure data pipelines run smoothly and efficiently with minimal downtime.
● Develop scalable and reusable frameworks for ingesting large and complex datasets.
● Integrate end-to-end data pipelines, ensuring quality and consistency from source systems to target repositories.
● Work with event-based and streaming technologies to ingest and process data in real time.
● Collaborate with other project team members to deliver additional components such as API interfaces and search functionalities.
● Evaluate the performance and applicability of various tools against customer requirements and provide recommendations.
● Provide technical advice to the team and assist in issue resolution, leveraging strong Cloud and Databricks knowledge.
● Provide on-call, after-hours, and weekend support as needed to maintain platform stability.
● Fulfil service requests related to the Data Analytics platform efficiently.
● Lead and drive optimisation and continuous improvement initiatives within the team.
● Conduct technical reviews of changes as part of release management, acting as a gatekeeper for production deployments.
● Adhere to data security standards and implement required controls within the platform.
● Lead the design, development, and deployment of advanced data pipelines and analytical workflows on the Databricks Lakehouse platform.
● Collaborate with data scientists, engineers, and business stakeholders to build and scale end-to-end data solutions.
● Own architectural decisions to ensure alignment with data governance, security, and compliance requirements.
● Mentor and guide a team of data engineers, providing technical leadership and supporting career development.
● Implement CI/CD practices for data engineering pipelines using tools like Azure DevOps, GitHub Actions, or Jenkins.
Qualifications and Experience:
● Bachelor’s degree in IT, Computer Science, Software Engineering, Business Analytics, or equivalent.
● Minimum of 7 years of experience in the data analytics field.
● Proven experience with Azure/AWS Databricks in building and optimising data pipelines, architectures, and datasets.
● Strong expertise in Scala or Python, PySpark, and SQL for data engineering tasks.
● Ability to troubleshoot and optimise complex queries on the Spark platform.
● Knowledge of structured and unstructured data design, modelling, access, and storage techniques.
● Experience designing and deploying data applications on cloud platforms such as Azure or AWS.
● Hands-on experience in performance tuning and optimising code running in a Databricks environment.
● Strong analytical and problem-solving skills, particularly within Big Data environments.
● Experience with Big Data management tools and technologies including Cloudera, Python, Hive, Scala, Data Warehouse, Data Lake, AWS, and Azure.
Technical and Professional Skills:
Must Have:
● Excellent communication skills with the ability to interact directly with customers.
● Azure/AWS Databricks.
● Python / Scala / Spark / PySpark.
● Strong SQL and RDBMS expertise.
● HIVE / HBase / Impala / Parquet.
● Sqoop, Kafka, Flume.
● Airflow.
● Jenkins or Bamboo.
● GitHub or Bitbucket.
● Nexus.
Good to Have:
● Relevant accredited certifications for Azure, AWS, Cloud Engineering, and/or Databricks.
● Knowledge of Delta Live Tables (DLT).
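As an illustration of the kind of Azure Databricks ingestion pipeline this role describes, here is a minimal PySpark sketch; the storage path, column names, and target table are assumptions, not details from the posting.

```python
# Minimal sketch of a batch ingestion pipeline on Azure Databricks.
# Source path, storage account, and table names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("raw_ingestion").getOrCreate()

# Read raw CSV files landed in ADLS Gen2 (hypothetical container/path)
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("abfss://landing@examplestorage.dfs.core.windows.net/sales/2025/")
)

# Light standardisation before persisting to the curated zone
curated = (
    raw.withColumn("ingest_ts", F.current_timestamp())
       .dropDuplicates(["order_id"])
)

# Persist as a Delta table, partitioned for downstream query efficiency
(
    curated.write
    .format("delta")
    .mode("append")
    .partitionBy("order_date")
    .saveAsTable("curated.sales_orders")
)
```

Writing to Delta with a partition column is one common way to keep downstream queries efficient, in line with the performance-tuning responsibilities listed above.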
Posted 2 days ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Software Engineer – Senior (Full Stack Backend – Java) Location: Chennai (Onsite) Employment Type: Contract Budget: Up to ₹22 LPA 34347 Assessment: Full Stack Backend – Java (via HackerRank or equivalent platform) Notice Period: Immediate Joiners Preferred Role Overview We are seeking a highly skilled Senior Software Engineer with expertise in backend development, microservices architecture, and cloud-native technologies. The selected candidate will be part of a collaborative product team responsible for developing and deploying REST APIs and microservices for digital platforms. The role involves working in a fast-paced agile environment, contributing to both engineering excellence and product innovation. Key Responsibilities Design, develop, test, and deploy high-quality, scalable backend systems and APIs. Collaborate with cross-functional teams including product managers, designers, and QA engineers to deliver customer-centric solutions. Write clean, maintainable, and well-documented code following industry best practices. Participate in pair programming, code reviews, and test-driven development. Contribute to defining architecture and service-level objectives. Conduct proof-of-concepts for new capabilities and features. Drive continuous improvement in code quality, testing, and deployment processes. Required Skills 7+ years of hands-on experience in software engineering with a focus on backend development or full-stack engineering. Strong expertise in Java and microservices architecture. Solid understanding and working knowledge of: Google Cloud Platform (GCP) services including BigQuery, Dataflow, Dataproc, Data Fusion, Cloud SQL, and Airflow. Infrastructure as Code (IaC) tools like Terraform. CI/CD tools such as Tekton. Databases: PostgreSQL, Cloud SQL. Programming/scripting: Python, PySpark. Building and consuming RESTful APIs. Preferred Qualifications Experience with containerization and orchestration tools. Familiarity with monitoring tools and service-level indicators (SLIs/SLAs). Exposure to agile frameworks like Extreme Programming (XP), Scrum, or Kanban. Education Required: Bachelor's degree in Computer Science, Engineering, or a related technical discipline. Skills: restful apis,pyspark,tekton,data fusion,bigquery,cloud sql,microservices architecture,microservices,software,terraform,postgresql,dataflow,code,cloud,dataproc,google cloud platform (gcp),ci/cd,airflow,python,java
Posted 2 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking a highly skilled and motivated Python, AWS, and Big Data Engineer to join our data engineering team. The ideal candidate will have hands-on experience with the Hadoop ecosystem and Apache Spark, and programming expertise in Python (PySpark), Scala, and Java. You will be responsible for designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives.
Posted 2 days ago
10.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
Job Description for Lead Data Engineer QA
Rank – Manager
Location – Bengaluru/Chennai/Kerala/Kolkata
Objectives and Purpose
The Lead Data Engineer QA will be responsible for testing business intelligence and data warehouse solutions, both on-premises and in cloud platforms. We are seeking an innovative and talented individual who can create test plans, protocols, and procedures for new software. In addition, you will be supporting the build of large-scale data architectures that provide information to downstream systems and business users.
Your Key Responsibilities
Design and execute manual and automated test cases, including validating alignment with ELT data integrity and compliance.
Support QA test case design, including identifying opportunities for test automation and developing scripts for automated processes as needed.
Follow quality standards, conduct continuous monitoring and improvement, and manage test cases, test data, and defect processes using a risk-based approach as needed.
Ensure all software releases meet regulatory standards, including requirements for validation, documentation, and traceability, with particular emphasis on data privacy and adherence to infrastructure security best practices.
Proactively foster strong partnerships across teams and stakeholders to ensure alignment with quality requirements and address any challenges.
Implement observability within testing processes to proactively identify, track, and resolve quality issues, contributing to sustained high-quality performance.
Establish methodology to test the effectiveness of BI and DWH projects, ELT reports, integration, and manual and automated functionality.
Work closely with the product team to monitor data quality, integrity, and security throughout the product lifecycle, implementing data quality checks to ensure accuracy, completeness, and consistency.
Lead the evaluation, implementation and deployment of emerging tools and processes to improve productivity.
Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations, using AWS native technologies, to support continuing increases in data source, volume, and complexity.
Define data requirements, gather, and mine data, while validating the efficiency of data tools in the Big Data environment.
Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
Partner with Business Analytics and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
Coordinate with Data Scientists to understand data requirements, and design solutions that enable advanced analytics, machine learning, and predictive modelling.
Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth.
Foster a culture of sharing, re-use, design for scale, stability, and operational efficiency of data and analytical solutions.
To qualify for the role, you must have the following:
Essential Skillsets
Bachelor’s degree in Engineering, Computer Science, Data Warehousing, or related field
10+ years of experience in software development, data science, data engineering, ETL, and analytics reporting development
Understanding of project and test lifecycle, including exposure to CMMi and process improvement frameworks
Experience designing, building, implementing, and maintaining data and system integrations using dimensional data modelling and the development and optimization of ETL pipelines
Proven track record of designing and implementing complex data solutions
Understanding of business intelligence concepts, ETL processing, dashboards, and analytics
Testing experience in Data Quality, ETL, OLAP, or Reports
Knowledge of Data Transformation Projects, including database design concepts and white-box testing
Experience in cloud-based data solutions – AWS/Azure
Demonstrated understanding and experience using:
Cloud-based data solutions (AWS, IICS, Databricks)
GxP and regulatory and risk compliance
Cloud AWS infrastructure testing
Python data processing
SQL scripting
Test processes (e.g., ELT testing, SDLC)
Power BI/Tableau
Scripting (e.g., Perl and shell)
Data Engineering Programming Languages (i.e., Python)
Distributed Data Technologies (e.g., PySpark)
Test Management and Defect Management tools (e.g., HP ALM)
Cloud platform deployment and tools (e.g., Kubernetes)
DevOps and continuous integration
Databricks/ETL
Understanding of database architecture and administration
Utilizes the principles of continuous integration and delivery to automate the deployment of code changes to elevated environments, fostering enhanced code quality, test coverage, and automation of resilient test cases
Possesses high proficiency in programming languages (e.g., SQL, Python, PySpark, AWS services) to design, maintain, and optimize data architectures/pipelines that fit business goals
Strong organizational skills with the ability to manage multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions
Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners
Strong problem solving and troubleshooting skills
Ability to work in a fast-paced environment and adapt to changing business priorities
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
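For a sense of what automated ELT validation of this kind can look like, below is a hedged pytest-style sketch of source-to-target reconciliation checks in PySpark; the table names and the key column are assumptions rather than details from the posting.

```python
# Illustrative source-to-target reconciliation checks of the kind used in ELT testing.
# Table names and the business key column are assumptions for the example.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("elt_reconciliation").getOrCreate()

source = spark.table("staging.customer_raw")
target = spark.table("warehouse.dim_customer")

def test_row_counts_match():
    # Completeness: every staged row should land in the dimension
    assert source.count() == target.count()

def test_no_duplicate_business_keys():
    # Integrity: the business key must stay unique after the load
    assert target.count() == target.select("customer_id").distinct().count()

def test_no_orphan_keys():
    # Consistency: no keys in the target that never existed in the source
    orphans = target.join(source, on="customer_id", how="left_anti")
    assert orphans.count() == 0
```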
Posted 2 days ago
6.0 years
6 - 9 Lacs
Hyderābād
On-site
CACI International Inc is an American multinational professional services and information technology company headquartered in Northern Virginia. CACI provides expertise and technology to enterprise and mission customers in support of national security missions and government transformation for defense, intelligence, and civilian customers. CACI has approximately 23,000 employees worldwide.
Headquartered in London, CACI Ltd is a wholly owned subsidiary of CACI International Inc., a publicly listed company on the NYSE with annual revenue in excess of US $6.2bn. Founded in 2022, CACI India is an exciting, growing and progressive business unit of CACI Ltd. CACI Ltd currently has over 2,000 intelligent professionals and is now adding many more from our Hyderabad and Pune offices. Through a rigorous emphasis on quality, CACI India has grown considerably to become one of the UK's most well-respected technology centres.
About the Data Platform:
The Data Platform will be built and managed “as a Product” to support a Data Mesh organization. The Data Platform focusses on enabling decentralized management, processing, analysis and delivery of data, while enforcing corporate-wide federated governance on data and project environments across business domains. The goal is to empower multiple teams to create and manage high-integrity data and data products that are analytics- and AI-ready, and consumed internally and externally.
What does a Data Infrastructure Engineer do?
A Data Infrastructure Engineer will be responsible for developing, maintaining and monitoring the data platform infrastructure and operations. The infrastructure and pipelines you build will support data processing, data analytics, data science and data management across the CACI business. The data platform infrastructure will conform to a zero-trust, least-privilege architecture, with strict adherence to data and infrastructure governance and control in a multi-account, multi-region AWS environment. You will use Infrastructure as Code and CI/CD to continuously improve, evolve and repair the platform. You will be able to design architectures and create re-usable solutions to reflect the business needs.
Responsibilities will include:
Collaborating across CACI departments to develop and maintain the data platform
Building infrastructure and data architectures in CloudFormation and SAM
Designing and implementing data processing environments and integrations using AWS PaaS such as Glue, EMR, SageMaker, Redshift, Aurora and Snowflake
Building data processing and analytics pipelines as code, using Python, SQL, PySpark, Spark, CloudFormation, Lambda, Step Functions and Apache Airflow
Monitoring and reporting on the data platform performance, usage and security
Designing and applying security and access control architectures to secure sensitive data
You will have:
6+ years of experience in a Data Engineering role.
Strong experience and knowledge of data architectures implemented in AWS using native AWS services such as S3, DataZone, Glue, EMR, SageMaker, Aurora and Redshift.
Experience administering databases and data platforms
Good coding discipline in terms of style, structure, versioning, documentation and unit tests
Strong proficiency in CloudFormation, Python and SQL
Knowledge and experience of relational databases such as Postgres and Redshift
Experience using Git for code versioning and lifecycle management
Experience operating to Agile principles and ceremonies
Hands-on experience with CI/CD tools such as GitLab
Strong problem-solving skills and ability to work independently or in a team environment.
Excellent communication and collaboration skills.
A keen eye for detail, and a passion for accuracy and correctness in numbers.
Whilst not essential, the following skills would also be useful:
Experience using Jira, or other agile project management and issue tracking software
Experience with Snowflake
Experience with Spatial Data Processing
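As a rough sketch of the "pipelines as code" orchestration mentioned above (assuming Apache Airflow 2.x), the DAG below wires an extract step ahead of a transform-and-load step; the DAG id, schedule, and task bodies are placeholders rather than anything specific to this platform.

```python
# A minimal Airflow DAG sketch for orchestrating an ingest-then-transform pipeline.
# Task bodies and the schedule are placeholders; real tasks would submit Glue/EMR/Spark jobs.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_from_source():
    # Placeholder: pull files from the landing bucket / source API
    print("extracting raw data")

def transform_and_load():
    # Placeholder: submit the PySpark transformation and load to the warehouse
    print("running transformation")

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_from_source)
    load = PythonOperator(task_id="transform_and_load", python_callable=transform_and_load)

    extract >> load
```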
Posted 2 days ago
5.0 years
3 - 7 Lacs
Hyderābād
On-site
Company Profile:
LSEG (London Stock Exchange Group) is a world-leading financial markets infrastructure and data business. We are dedicated, open-access partners with a commitment to excellence in delivering services across Data & Analytics, Capital Markets, and Post Trade. Backed by three hundred years of experience, innovative technologies, and a team of over 23,000 people in 70 countries, our purpose is driving financial stability, empowering economies, and enabling customers to create sustainable growth. Working in partnership with Tata Consultancy Services (TCS), we are excited to expand our tech centres of excellence in India by building a new global centre, right here in the heart of Hyderabad.
Role Profile:
As a Sr AWS Developer, you will participate in all aspects of the software development lifecycle, which includes estimating, technical design, implementation, documentation, testing, deployment, and support of applications developed for our clients. As a member working in a team environment, you will work with solution architects and developers on the interpretation/translation of wireframes and creative designs into functional requirements, and subsequently into technical design.
Key Responsibilities:
5+ years of experience designing, building, and maintaining robust, scalable and efficient ETL pipelines using Python and Spark.
Develop workflows using AWS services such as Glue, Glue Data Catalog, Lambda, S3, EMR Serverless, and API Gateway.
Implement data quality frameworks and governance practices to ensure reliable data processing.
Optimize existing workflows and drive transformation of the data from multiple sources.
Monitor system performance and ensure data reliability through proactive optimizations.
Contribute to technical discussions and deliver high-quality solutions.
Essential/Must-Have Skills:
Hands-on experience with AWS Glue, Glue Data Catalog, Lambda, S3, EMR, EMR Serverless, API Gateway, SNS, SQS, CloudWatch, CloudFormation and CloudFront.
Strong understanding of data quality frameworks, governance practices and scalable architectures.
Practical knowledge of integrating different data sources and transforming the data.
Agile methodology experience, including sprint planning and retrospectives.
Excellent interpersonal skills for articulating technical solutions to diverse team members.
Experience in additional programming languages such as Java and Node.js.
Experience with Java, Terraform, Ansible, Python, PySpark, etc.
Knowledge of tools such as Kafka, Datadog, GitLab, Jenkins, Docker, Kubernetes.
Desirable Skills:
Nice to have: AWS Certified Developer or AWS Certified Solutions Architect.
Experience with serverless computing paradigms.
LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision making and everyday actions. Working with us means that you will be part of a dynamic organisation of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce.
You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth. Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy and creating inclusive economic opportunity. LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days and wellbeing initiatives. We are proud to be an equal opportunities employer. This means that we do not discriminate on the basis of anyone’s race, religion, colour, national origin, gender, sexual orientation, gender identity, gender expression, age, marital status, veteran status, pregnancy or disability, or any other basis protected under applicable law. Conforming with applicable law, we can reasonably accommodate applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) (we) may hold about you, what it’s used for, and how it’s obtained, your rights and how to contact us as a data subject . If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.
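To illustrate the Glue-based ETL work described in the Key Responsibilities above, here is a minimal AWS Glue PySpark job skeleton; the catalog database, table name, and S3 output path are invented for the example.

```python
# Skeleton of an AWS Glue PySpark job of the kind described above.
# Database, table, and output path names are illustrative only.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="orders"
)

# Drop obviously bad rows and write curated Parquet back to S3
cleaned = orders.toDF().dropna(subset=["order_id"])
cleaned.write.mode("overwrite").parquet("s3://example-curated-bucket/orders/")

job.commit()
```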
Posted 2 days ago
3.0 years
6 - 8 Lacs
Hyderābād
Remote
Accellor is looking for a Data Engineer with extensive experience in developing ETL processes using PySpark Notebooks and Microsoft Fabric, and supporting existing legacy SQL Server environments. The ideal candidate will possess a strong background in Spark-based development, demonstrate a high proficiency in SQL, and be comfortable working independently, collaboratively within a team, or leading other developers when required.
Design, develop, and maintain ETL pipelines using PySpark Notebooks and Microsoft Fabric.
Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver efficient data solutions.
Migrate and integrate data from legacy SQL Server environments into modern data platforms.
Optimize data pipelines and workflows for scalability, efficiency, and reliability.
Provide technical leadership and mentorship to junior developers and other team members.
Troubleshoot and resolve complex data engineering issues related to performance, data quality, and system scalability.
Develop, maintain, and enforce data engineering best practices, coding standards, and documentation.
Conduct code reviews and provide constructive feedback to improve team productivity and code quality.
Support data-driven decision-making processes by ensuring data integrity, availability, and consistency across different platforms.
Requirements
Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field.
Experience with Microsoft Fabric or similar cloud-based data integration platforms is a must.
A minimum of 3 years of experience in data engineering, with a strong focus on ETL development using PySpark or other Spark-based tools.
Proficiency in SQL with extensive experience in complex queries, performance tuning, and data modeling.
Strong knowledge of data warehousing concepts, ETL frameworks, and big data processing.
Familiarity with other data processing technologies (e.g., Hadoop, Hive, Kafka) is an advantage.
Experience working with both structured and unstructured data sources.
Excellent problem-solving skills and the ability to troubleshoot complex data engineering issues.
Proven ability to work independently, as part of a team, and in leadership roles.
Strong communication skills with the ability to translate complex technical concepts into business terms.
Mandatory skills
Experience with Data Lake, Data Warehouse, Delta Lake
Experience with Azure Data Services, including Azure Data Factory, Azure Synapse, or similar tools.
Knowledge of scripting languages (e.g., Python, Scala) for data manipulation and automation.
Familiarity with DevOps practices, CI/CD pipelines, and containerization (Docker, Kubernetes) is a plus.
Benefits
Exciting Projects: We focus on industries like High-Tech, communication, media, healthcare, retail and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment — or even abroad in one of our global centers.
Work-Life Balance: Accellor prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training, stress management programs, professional certifications, and technical and soft skill trainings.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Personal Accident Insurance, Periodic health awareness program, extended maternity leave, annual performance bonuses, and referral bonuses.
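A hedged sketch of the kind of Fabric PySpark notebook step this role involves, landing a legacy SQL Server table into a lakehouse Delta table; the JDBC connection details, credentials handling, and table names are assumptions, and the exact setup will vary by workspace.

```python
# Minimal sketch of a Fabric / PySpark notebook step that lands a legacy
# SQL Server table into a lakehouse Delta table. Connection details, the
# secret mechanism, and table names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in notebooks

legacy_orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://legacy-db.example.com:1433;databaseName=Sales")
    .option("dbtable", "dbo.Orders")
    .option("user", "etl_reader")
    .option("password", "<retrieved-from-key-vault>")
    .load()
)

staged = legacy_orders.withColumn("loaded_at", F.current_timestamp())

# Overwrite the corresponding Delta table in the attached lakehouse
staged.write.format("delta").mode("overwrite").saveAsTable("stg_orders")
```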
Posted 2 days ago
5.0 - 9.0 years
7 - 8 Lacs
Hyderābād
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
[Data Engineer]
What you will do
Let’s do this. Let’s change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
Roles & Responsibilities:
● Design, develop, and maintain data solutions for data generation, collection, and processing
● Be a key team member that assists in the design and development of the data pipeline
● Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
● Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
● Take ownership of data pipeline projects from inception to deployment, managing scope, timelines, and risks
● Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
● Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
● Implement data security and privacy measures to protect sensitive data
● Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
● Collaborate and communicate effectively with product teams
● Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions
● Identify and resolve complex data-related challenges
● Adhere to best practices for coding, testing, and designing reusable code/components
● Explore new tools and technologies that will help to improve ETL platform performance
● Participate in sprint planning meetings and provide estimations on technical implementation
What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications and Experience:
Master's or Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience
Functional Skills:
Must-Have Skills:
Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning on big data processing
Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
Excellent problem-solving skills and the ability to work with large, complex datasets
Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)
Good-to-Have Skills:
Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
Strong understanding of data modeling, data warehousing, and data integration concepts
Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms
Soft Skills:
Excellent critical-thinking and problem-solving skills
Strong communication and collaboration skills
Demonstrated awareness of how to function in a team setting
Demonstrated presentation skills
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Apply now and make a lasting impact with the Amgen team. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.
Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
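As a small illustration of the "ensure data quality" responsibility above, the following PySpark sketch runs a few batch-level checks before a load; the table, columns, and thresholds are assumptions, not details from the posting.

```python
# A small illustration of in-pipeline data-quality checks before a load step.
# Column names and thresholds are assumptions for the example.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.table("staging.clinical_events")

total = df.count()
null_subject_ids = df.filter(F.col("subject_id").isNull()).count()
duplicate_keys = total - df.dropDuplicates(["subject_id", "event_date"]).count()

# Fail fast so bad batches never reach downstream consumers
if total == 0:
    raise ValueError("Quality check failed: staging table is empty")
if null_subject_ids / total > 0.01:
    raise ValueError(f"Quality check failed: {null_subject_ids} rows missing subject_id")
if duplicate_keys > 0:
    raise ValueError(f"Quality check failed: {duplicate_keys} duplicate event rows")
```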
Posted 2 days ago
15.0 years
0 Lacs
Bhubaneshwar
On-site
Project Role: Custom Software Engineer
Project Role Description: Develop custom software solutions to design, code, and enhance components across systems or applications. Use modern frameworks and agile practices to deliver scalable, high-performing solutions tailored to specific business needs.
Must have skills: PySpark
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Custom Software Engineer, you will develop custom software solutions to design, code, and enhance components across systems or applications. Your typical day will involve collaborating with cross-functional teams to understand business requirements, utilizing modern frameworks and agile practices to deliver scalable and high-performing solutions tailored to specific business needs. You will engage in problem-solving activities, ensuring that the software solutions meet the highest standards of quality and performance while adapting to evolving project requirements.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve software development processes to increase efficiency.
Professional & Technical Skills:
- Must-Have Skills: Proficiency in PySpark.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with modern software development methodologies, particularly Agile.
- Familiarity with cloud platforms and services for deploying applications.
- Ability to troubleshoot and optimize performance in software applications.
Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
Posted 2 days ago
15.0 years
0 Lacs
Indore
On-site
Project Role: Custom Software Engineer
Project Role Description: Develop custom software solutions to design, code, and enhance components across systems or applications. Use modern frameworks and agile practices to deliver scalable, high-performing solutions tailored to specific business needs.
Must have skills: PySpark
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Custom Software Engineer, you will develop custom software solutions to design, code, and enhance components across systems or applications. Your typical day will involve collaborating with cross-functional teams to understand business requirements, utilizing modern frameworks and agile practices to deliver scalable and high-performing solutions tailored to specific business needs. You will engage in problem-solving activities, ensuring that the software solutions meet the highest standards of quality and performance while adapting to evolving project requirements.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve software development processes to increase efficiency.
Professional & Technical Skills:
- Must-Have Skills: Proficiency in PySpark.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with modern software development methodologies, particularly Agile.
- Familiarity with cloud platforms and services for deploying applications.
- Ability to troubleshoot and optimize performance in software applications.
Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
Posted 2 days ago
7.0 - 10.0 years
2 - 9 Lacs
Noida
On-site
Posted On: 30 Jul 2025
Location: Noida, UP, India
Company: Iris Software
Why Join Us?
Are you inspired to grow your career at one of India’s Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It’s happening right here at Iris Software.
About Iris Software
At Iris Software, our vision is to be our client’s most trusted technology partner, and the first choice for the industry’s top professionals to realize their full potential. With over 4,300 associates across India, U.S.A, and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation.
Working at Iris
Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about “Being Your Best” – as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We’re a place where everyone can discover and be their best version.
Job Description
We need a Sr. Databricks Developer – 7 to 10 years.
Core skills:
Databricks – Level: Advanced
SQL (MSSQL Server) – Joins, SQL optimization, basic knowledge of stored procedures and functions
PySpark – Level: Advanced
Azure Delta Lake
Python – Basic
Mandatory Competencies
Perks and Benefits for Irisians
At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
Posted 2 days ago
15.0 years
0 Lacs
Calcutta
On-site
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Apache Spark
Good to have skills: Java, Scala, PySpark
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the data architecture. You will be involved in analyzing requirements, proposing solutions, and ensuring that the data platform aligns with organizational goals and standards. Your role will require you to stay updated with industry trends and best practices to contribute effectively to the team.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Engage in continuous learning to stay abreast of emerging technologies and methodologies.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
Professional & Technical Skills:
- Must-Have Skills: Proficiency in Apache Spark.
- Good-to-Have Skills: Experience with Java, Scala, PySpark.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with data integration tools and techniques.
- Familiarity with cloud platforms and services related to data engineering.
Additional Information:
- The candidate should have a minimum of 3 years of experience in Apache Spark.
- This position is based at our Kolkata office.
- A 15 years full time education is required.
Posted 2 days ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Mandatory: Proficiency in Python with experience in Databricks (PySpark).
Good to Have: Hands-on experience with Apache Airflow. Working knowledge of PostgreSQL and MongoDB. Basic experience with cloud technologies such as Azure, AWS and Google Cloud.
Posted 2 days ago
5.0 years
0 Lacs
Andhra Pradesh
On-site
ETL Lead: We are looking for a Senior ETL Developer for our Enterprise Data Warehouse. In this role you will be part of a team working to develop solutions enabling the business to leverage data as an asset at the bank. The Senior ETL Developer should have extensive knowledge of data warehousing and cloud technologies. If you consider data a strategic asset, evangelize the value of good data and insights, and have a passion for learning and continuous improvement, this role is for you.
Key Responsibilities
Translate requirements and data mapping documents into a technical design.
Develop, enhance and maintain code following best practices and standards.
Create and execute unit test plans. Support regression and system testing efforts.
Debug and problem-solve issues found during testing and/or production.
Communicate status, issues and blockers with the project team.
Support continuous improvement by identifying and solving opportunities.
Basic Qualifications
Bachelor's degree or military experience in a related field (preferably computer science).
At least 5 years of experience in ETL development within a Data Warehouse.
Deep understanding of enterprise data warehousing best practices and standards.
Strong experience in software engineering comprising designing, developing and operating robust and highly scalable cloud infrastructure services.
Strong experience with Python/PySpark, DataStage ETL and SQL development.
Proven experience in cloud infrastructure projects with hands-on migration expertise on public clouds such as AWS and Azure, preferably Snowflake.
Knowledge of Cybersecurity organization practices, operations, risk management processes, principles, architectural requirements, engineering and threats and vulnerabilities, including incident response methodologies.
Understanding of Authentication and Authorization Services, and Identity & Access Management.
Strong communication and interpersonal skills.
Strong organization skills and the ability to work independently as well as with a team.
Preferred Qualifications
AWS Certified Solutions Architect Associate, AWS Certified DevOps Engineer Professional and/or AWS Certified Solutions Architect Professional.
Experience defining future state roadmaps for data warehouse applications.
Experience leading teams of developers within a project.
Experience in the financial services (banking) industry.
Mandatory Skills
ETL - Data Warehouse concepts
Snowflake
CI/CD Tools (Jenkins, GitHub)
Python
DataStage
About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.
Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
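In the spirit of the "create and execute unit test plans" responsibility above, here is a minimal pytest example for a PySpark transformation; the transformation under test (add_full_name) and its columns are hypothetical helpers invented for the example.

```python
# Hypothetical unit test for a small PySpark transformation, illustrating
# how ETL logic can be covered by automated unit tests.
import pytest
from pyspark.sql import SparkSession, functions as F

def add_full_name(df):
    # Transformation under test: derive full_name from first/last name columns
    return df.withColumn("full_name", F.concat_ws(" ", "first_name", "last_name"))

@pytest.fixture(scope="module")
def spark():
    return SparkSession.builder.master("local[1]").appName("etl-tests").getOrCreate()

def test_add_full_name(spark):
    df = spark.createDataFrame(
        [("Ada", "Lovelace"), ("Grace", "Hopper")], ["first_name", "last_name"]
    )
    result = add_full_name(df).select("full_name").collect()
    assert [r.full_name for r in result] == ["Ada Lovelace", "Grace Hopper"]
```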
Posted 2 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Min Experience: 4.0 years
Max Experience: 8.0 years
Skills: Kubernetes, PySpark, Docker, GitLab, dbt, Python, Reliability, Angular 2, Grafana, AWS, Monitoring and Observability
Location: Pune
Job description:
Company Overview
Bridgenext is a global consulting company that provides technology-empowered business solutions for world-class organizations. Our global workforce of over 800 consultants provides best-in-class services to our clients to realize their digital transformation journey. Our clients span the emerging, mid-market and enterprise space. With multiple offices worldwide, we are uniquely positioned to deliver digital solutions to our clients leveraging Microsoft, Java and Open Source with a focus on Mobility, Cloud, Data Engineering and Intelligent Automation. Emtec’s singular mission is to create “Clients for Life” - long-term relationships that deliver rapid, meaningful, and lasting business value.
At Bridgenext, we have a unique blend of Corporate and Entrepreneurial cultures. This is where you would have an opportunity to drive business value for clients while you innovate and continue to grow and have fun while doing it. You would work with team members who are vibrant, smart and passionate and they bring their passion to all that they do – whether it’s learning, giving back to our communities or always going the extra mile for our client.
Position Description
We are looking for members with hands-on Data Engineering experience who will work on internal and customer-based projects for Bridgenext. We are looking for someone who cares about the quality of code and who is passionate about providing the best solution to meet the client's needs and anticipating their future needs based on an understanding of the market. Someone who has worked on Hadoop projects, including processing and data representation using various AWS services.
Must Have Skills:
· 4-8 years of overall experience
· Strong programming experience with Python and the ability to write modular code following best practices, backed by unit tests with a high degree of coverage
· Knowledge of source control (Git/GitLab)
· Understanding of deployment patterns along with knowledge of CI/CD and build tools
· Knowledge of Kubernetes concepts and commands is a must
· Knowledge of monitoring and alerting tools like Grafana and OpenTelemetry is a must
· Knowledge of Astro/Airflow is a plus
· Knowledge of data governance is a plus
· Experience with cloud providers, preferably AWS
· Experience with PySpark, Snowflake and dbt is good to have
Professional Skills:
Solid written, verbal, and presentation communication skills
Strong team and individual player
Maintains composure during all types of situations and is collaborative by nature
High standards of professionalism, consistently producing high-quality results
Self-sufficient, independent, requiring very little supervision or intervention
Demonstrate flexibility and openness to bring creative solutions to address issues
Posted 2 days ago
4.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Data Engineer (AWS QuickSight, Glue, PySpark)
Location: Noida
Job Summary:
We are seeking a skilled Data Engineer with 4-5 years of experience to design, build, and maintain scalable data pipelines and analytics solutions within the AWS cloud environment. The ideal candidate will leverage AWS Glue, PySpark, and QuickSight to deliver robust data integration, transformation, and visualization capabilities. This role is critical in supporting business intelligence, analytics, and reporting needs across the organization.
Key Responsibilities:
Design, develop, and maintain data pipelines using AWS Glue, PySpark, and related AWS services to extract, transform, and load (ETL) data from diverse sources.
Build and optimize data warehouse/data lake infrastructure on AWS, ensuring efficient data storage, processing, and retrieval.
Develop and manage ETL processes to source data from various systems, including databases, APIs, and file storage, and create unified data models for analytics and reporting.
Implement and maintain business intelligence dashboards using Amazon QuickSight, enabling stakeholders to derive actionable insights.
Collaborate with cross-functional teams (business analysts, data scientists, product managers) to understand requirements and deliver scalable data solutions.
Ensure data quality, integrity, and security throughout the data lifecycle, implementing best practices for governance and compliance.
Support self-service analytics by empowering internal users to access and analyze data through QuickSight and other reporting tools.
Troubleshoot and resolve data pipeline issues, optimizing performance and reliability as needed.
Required Skills & Qualifications:
Proficiency in AWS cloud services: AWS Glue, QuickSight, S3, Lambda, Athena, Redshift, EMR, and related technologies.
Strong experience with PySpark for large-scale data processing and transformation.
Expertise in SQL and data modeling for relational and non-relational databases.
Experience building and optimizing ETL pipelines and data integration workflows.
Familiarity with business intelligence and visualization tools, especially Amazon QuickSight.
Knowledge of data governance, security, and compliance best practices.
Strong programming skills in Python; experience with automation and scripting.
Ability to work collaboratively in agile environments and manage multiple priorities effectively.
Excellent problem-solving and communication skills.
Preferred Qualifications:
AWS certification (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Developer).
Good to have: understanding of machine learning, deep learning and Generative AI concepts, regression, classification, predictive modeling, clustering, and deep learning.
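To illustrate how a pipeline step might tie Glue and Athena together ahead of QuickSight reporting, here is a hedged boto3 sketch; the job name, database, region, and S3 results bucket are placeholders, not details from this posting.

```python
# Sketch of a pipeline step that triggers a Glue job and then runs an Athena
# query over the curated data; job, database, and bucket names are assumptions.
import boto3

glue = boto3.client("glue", region_name="ap-south-1")
athena = boto3.client("athena", region_name="ap-south-1")

# Kick off the curation job defined in AWS Glue
run = glue.start_job_run(JobName="curate-sales-orders")
print("Started Glue job run:", run["JobRunId"])

# Query the curated table with Athena for a QuickSight-ready aggregate
query = athena.start_query_execution(
    QueryString="SELECT order_date, SUM(amount) AS revenue "
                "FROM curated.sales_orders GROUP BY order_date",
    QueryExecutionContext={"Database": "curated"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Athena query execution id:", query["QueryExecutionId"])
```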
Posted 2 days ago
8.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview
The Data Analyst will be responsible for partnering closely with business and S&T teams in preparing final analysis reports for stakeholders, enabling them to make important decisions based on various facts and trends, and will lead data requirement, source analysis, data analysis, data transformation and reconciliation activities. This role will interact with the DG, DPM, EA, DE, EDF, PO and D&AI teams for historical data requirements and sourcing the data for the Mosaic AI program to scale the solution to new markets.
Responsibilities
Lead data requirement, source analysis, data analysis, data transformation and reconciliation activities.
Partner with the FP&A Product Owner and associated business SMEs to understand and document business requirements and associated needs.
Perform the analysis of business data requirements and translate them into a data design that satisfies local, sector and global requirements.
Use automated tools to extract data from primary and secondary sources.
Use statistical tools to identify, analyse, and interpret patterns and trends in complex data sets to support diagnosis and prediction.
Work with engineers and business teams to identify process improvement opportunities and propose system modifications.
Proactively identify impediments and look for pragmatic and constructive solutions to mitigate risk.
Be a champion for continuous improvement and drive efficiency.
Preference will be given to candidates having a functional understanding of financial concepts (P&L, Balance Sheet, Cash Flow, Operating Expense) and experience modelling data and designing data flows.
Qualifications
Bachelor of Technology from a reputed college
Minimum 8-10 years of relevant work experience in data modelling / analytics
Minimum 5-6 years of experience navigating data in Azure Databricks, Synapse, Teradata or similar database technologies
Expertise in Azure (Databricks, Data Factory, Data Lake Storage Gen2)
Proficiency in SQL and PySpark to analyse data for both development validation and operational support is critical
Exposure to GenAI
Good communication and presentation skills are a must for this role.
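As a loose illustration of the trend analysis in Azure Databricks described above, the PySpark snippet below computes a month-over-month change; the table and column names are assumptions invented for the example.

```python
# Illustrative Databricks/PySpark snippet for simple trend analysis.
# The source table and column names are assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

sales = spark.table("finance.pnl_actuals")

monthly = (
    sales.groupBy(F.date_trunc("month", "posting_date").alias("month"))
         .agg(F.sum("amount").alias("total_amount"))
)

# Month-over-month change to highlight trends for the stakeholder report
w = Window.orderBy("month")
trend = monthly.withColumn(
    "mom_change", F.col("total_amount") - F.lag("total_amount").over(w)
)
trend.show()
```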
Posted 2 days ago
4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your Role And Responsibilities
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.
Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework with Python or Scala and Big Data technologies, built on the platform.
Experience in developing streaming pipelines.
Experience working with Hadoop / Azure ecosystem components to implement scalable solutions to meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing.
Preferred Education
Master's Degree
Required Technical And Professional Expertise
4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala.
Minimum 3 years of experience on Cloud Data Platforms on Azure.
Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB.
Good to excellent SQL skills.
Preferred Technical And Professional Experience
Certification in Azure and Databricks, or Cloudera Spark Certified developers.
Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB.
Knowledge or experience of Snowflake will be an added advantage.
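For the streaming-pipeline experience mentioned above, a minimal Spark Structured Streaming sketch reading from Kafka and writing Delta (assuming a Databricks or otherwise Delta-enabled environment) might look like the following; broker addresses, topic, and paths are placeholders.

```python
# Minimal sketch of a streaming ingestion pipeline with Spark Structured Streaming
# and Kafka; broker addresses, topic, and paths are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "clickstream-events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast the payload before writing out
parsed = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (
    parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/clickstream")
    .outputMode("append")
    .start("/mnt/curated/clickstream")
)
query.awaitTermination()
```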
Posted 2 days ago
10.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
Job Description for Lead Data Engineer QA
Rank – Manager
Location – Bengaluru/Chennai/Kerala/Kolkata
Objectives and Purpose
The Lead Data Engineer QA will be responsible for testing business intelligence and data warehouse solutions, both on-premises and in cloud platforms. We are seeking an innovative and talented individual who can create test plans, protocols, and procedures for new software. In addition, you will be supporting the build of large-scale data architectures that provide information to downstream systems and business users.
Your Key Responsibilities
Design and execute manual and automated test cases, including validating alignment with ELT data integrity and compliance.
Support QA test case design, including identifying opportunities for test automation and developing scripts for automated processes as needed.
Follow quality standards, conduct continuous monitoring and improvement, and manage test cases, test data, and defect processes using a risk-based approach as needed.
Ensure all software releases meet regulatory standards, including requirements for validation, documentation, and traceability, with particular emphasis on data privacy and adherence to infrastructure security best practices.
Proactively foster strong partnerships across teams and stakeholders to ensure alignment with quality requirements and address any challenges.
Implement observability within testing processes to proactively identify, track, and resolve quality issues, contributing to sustained high-quality performance.
Establish methodology to test the effectiveness of BI and DWH projects, ELT reports, integration, and manual and automated functionality.
Work closely with the product team to monitor data quality, integrity, and security throughout the product lifecycle, implementing data quality checks to ensure accuracy, completeness, and consistency.
Lead the evaluation, implementation and deployment of emerging tools and processes to improve productivity.
Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations, using AWS native technologies, to support continuing increases in data source, volume, and complexity.
Define data requirements, gather, and mine data, while validating the efficiency of data tools in the Big Data environment.
Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
Partner with Business Analytics and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
Coordinate with Data Scientists to understand data requirements, and design solutions that enable advanced analytics, machine learning, and predictive modelling.
Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth.
Foster a culture of sharing, re-use, design for scale, stability, and operational efficiency of data and analytical solutions.
To qualify for the role, you must have the following:
Essential Skillsets
Bachelor’s degree in Engineering, Computer Science, Data Warehousing, or a related field
10+ years of experience in software development, data science, data engineering, ETL, and analytics reporting development
Understanding of project and test lifecycles, including exposure to CMMi and process improvement frameworks
Experience designing, building, implementing, and maintaining data and system integrations using dimensional data modelling and the development and optimization of ETL pipelines
Proven track record of designing and implementing complex data solutions
Understanding of business intelligence concepts, ETL processing, dashboards, and analytics
Testing experience in Data Quality, ETL, OLAP, or Reports
Knowledge of Data Transformation projects, including database design concepts and white-box testing
Experience in cloud-based data solutions – AWS/Azure
Demonstrated understanding and experience using:
Cloud-based data solutions (AWS, IICS, Databricks)
GxP and regulatory and risk compliance
Cloud AWS infrastructure testing
Python data processing
SQL scripting
Test processes (e.g., ELT testing, SDLC)
Power BI/Tableau
Scripting (e.g., Perl and shell)
Data engineering programming languages (i.e., Python)
Distributed data technologies (e.g., PySpark)
Test management and defect management tools (e.g., HP ALM)
Cloud platform deployment and tools (e.g., Kubernetes)
DevOps and continuous integration
Databricks/ETL
Understanding of database architecture and administration
Applies the principles of continuous integration and delivery to automate the deployment of code changes to elevated environments, fostering enhanced code quality, test coverage, and automation of resilient test cases
Possesses high proficiency in programming languages (e.g., SQL, Python, PySpark, AWS services) to design, maintain, and optimize data architecture/pipelines that fit business goals
Strong organizational skills with the ability to manage multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions
Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners
Strong problem-solving and troubleshooting skills
Ability to work in a fast-paced environment and adapt to changing business priorities
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
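For illustration, a minimal sketch of the kind of automated ELT data-quality checks this role calls for; the table names and rules are hypothetical assumptions, not part of the posting.

```python
# Illustrative ELT data-quality checks (table names and rules are hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("elt_dq_checks").getOrCreate()

source = spark.table("staging.customers")
target = spark.table("dwh.dim_customer")

checks = {
    # Row counts should reconcile between source and target
    "row_count_match": source.count() == target.count(),
    # The business key must be unique in the target dimension
    "no_duplicate_keys": target.groupBy("customer_id").count()
                               .filter("count > 1").count() == 0,
    # Mandatory columns must not contain nulls
    "no_null_emails": target.filter(F.col("email").isNull()).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
assert not failed, f"Data quality checks failed: {failed}"
```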
Posted 2 days ago
12.0 years
0 Lacs
Kochi, Kerala, India
On-site
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.
Job Description
Title: Data Architect
Type of Employment: Permanent
Overall Years of Experience: 12-15 years
Relevant Years of Experience: 10+
Data Architect
The Data Architect is responsible for designing and implementing data architecture for multiple projects and for building strategies for data governance.
Position Summary
12-15 years of experience in a similar profile with a strong service delivery background. Experience as a Data Architect with a focus on Spark and Data Lake technologies. Experience in Azure Synapse Analytics. Proficiency in Apache Spark for large-scale data processing. Expertise in Databricks, Delta Lake, Azure Data Factory, and other cloud-based data services. Strong understanding of data modeling, ETL processes, and data warehousing principles. Experience implementing a data governance framework with Unity Catalog. Knowledge of designing scalable streaming data pipelines using Azure Event Hubs, Azure Stream Analytics, and Spark Streaming. Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro. Hands-on experience in Python and relevant libraries such as PySpark and NumPy. Knowledge of machine learning pipelines, GenAI, and LLMs will be a plus. Excellent analytical, problem-solving, and technical leadership skills. Experience in integration with business intelligence tools such as Power BI. Effective communication and collaboration abilities. Excellent interpersonal skills and a collaborative management style. Able to own and delegate responsibilities effectively. Ability to analyse and suggest solutions. Strong command of verbal and written English.
Essential Roles and Responsibilities
Work as a Data Architect, able to design and implement data architecture for projects with complex data such as Big Data and data lakes. Work with customers to define the strategy for data architecture and data governance. Guide the team in implementing data engineering solutions. Proactively identify risks, communicate them to stakeholders, and develop strategies to mitigate them. Build best practices to enable faster service delivery. Build reusable components to reduce cost. Build scalable and cost-effective architecture.
EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
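For illustration, a minimal Spark Structured Streaming sketch of the kind of streaming-to-Delta pipeline mentioned above; the broker endpoint, topic, and paths are hypothetical, and authentication options are omitted for brevity.

```python
# Illustrative streaming ingestion to Delta Lake (broker endpoint, topic, and
# paths are hypothetical; SASL/authentication options are omitted for brevity).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream_to_delta").getOrCreate()

events = (spark.readStream
          .format("kafka")  # Azure Event Hubs also exposes a Kafka-compatible endpoint
          .option("kafka.bootstrap.servers", "example-namespace.servicebus.windows.net:9093")
          .option("subscribe", "telemetry")
          .load())

parsed = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"))

(parsed.writeStream
 .format("delta")
 .option("checkpointLocation", "/mnt/checkpoints/telemetry")
 .outputMode("append")
 .start("/mnt/delta/telemetry"))
```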
Posted 2 days ago
4.0 years
0 Lacs
Andhra Pradesh, India
On-site
Job Title: Data Engineer (4+ Years Experience)
Location: Pan India
Job Type: Full-Time
Experience: 4+ Years
Notice Period: Immediate to 30 days preferred
Job Summary
We are looking for a skilled and motivated Data Engineer with 4+ years of experience in building and maintaining scalable data pipelines. The ideal candidate will have strong expertise in AWS Redshift and Python/PySpark, with exposure to AWS Glue, Lambda, and ETL tools being a plus. You will play a key role in designing robust data solutions to support analytical and operational needs across the organization.
Key Responsibilities
Design, develop, and optimize large-scale ETL/ELT data pipelines using PySpark or Python. Implement and manage data models and workflows in AWS Redshift. Work closely with analysts, data scientists, and stakeholders to understand data requirements and deliver reliable solutions. Perform data validation, cleansing, and transformation to ensure high data quality. Build and maintain automation scripts and jobs using Lambda and Glue (if applicable). Ingest, transform, and manage data from various sources into cloud-based data lakes (e.g., S3). Participate in data architecture and platform design discussions. Monitor pipeline performance, troubleshoot issues, and ensure data reliability. Document data workflows, processes, and infrastructure components.
Required Skills
4+ years of hands-on experience as a Data Engineer. Strong proficiency in AWS Redshift, including schema design, performance tuning, and SQL development. Expertise in Python and PySpark for data manipulation and pipeline development. Experience working with structured and semi-structured data (JSON, Parquet, etc.). Deep knowledge of data warehouse design principles, including star/snowflake schemas and dimensional modeling.
Good To Have
Working knowledge of AWS Glue and building serverless ETL pipelines. Experience with AWS Lambda for lightweight processing and orchestration. Exposure to ETL tools like Informatica, Talend, or Apache NiFi. Familiarity with workflow orchestrators (e.g., Airflow, Step Functions). Knowledge of DevOps practices, version control (Git), and CI/CD pipelines.
Preferred Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field. AWS certifications (e.g., AWS Certified Data Analytics, Developer Associate) are a plus.
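For illustration, a minimal sketch of a PySpark transform that stages Parquet to S3 followed by the Redshift COPY that would load it; the bucket names, target table, and IAM role are hypothetical placeholders.

```python
# Illustrative staging-and-load step for Redshift (bucket names, target table,
# and IAM role are hypothetical placeholders).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stage_to_redshift").getOrCreate()

orders = (spark.read.json("s3://example-raw-bucket/orders/")
          .withColumn("order_date", F.to_date("order_ts"))
          .select("order_id", "customer_id", "order_date", "amount"))

# Stage the transformed data as Parquet in the data lake
orders.write.mode("overwrite").parquet("s3://example-curated-bucket/orders/")

# The staged Parquet would then be loaded with a COPY statement, run through
# the Redshift Data API or any SQL client:
copy_sql = """
COPY analytics.fact_orders
FROM 's3://example-curated-bucket/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
FORMAT AS PARQUET;
"""
```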
Posted 2 days ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
Design and develop robust ETL pipelines using Python, PySpark, and GCP services. Build and optimize data models and queries in BigQuery for analytics and reporting. Ingest, transform, and load structured and semi-structured data from various sources. Collaborate with data analysts, scientists, and business teams to understand data requirements. Ensure data quality, integrity, and security across cloud-based data platforms. Monitor and troubleshoot data workflows and performance issues. Automate data validation and transformation processes using scripting and orchestration tools.
Required Skills & Qualifications
Hands-on experience with Google Cloud Platform (GCP), especially BigQuery. Strong programming skills in Python and/or PySpark. Experience in designing and implementing ETL workflows and data pipelines. Proficiency in SQL and data modeling for analytics. Familiarity with GCP services such as Cloud Storage, Dataflow, Pub/Sub, and Composer. Understanding of data governance, security, and compliance in cloud environments. Experience with version control (Git) and agile development practices.
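For illustration, a minimal sketch of loading staged Parquet from Cloud Storage into BigQuery with the google-cloud-bigquery client; the project, bucket, and table names are hypothetical placeholders.

```python
# Illustrative load of staged Parquet from Cloud Storage into BigQuery
# (project, bucket, and table names are hypothetical placeholders).
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/curated/orders/*.parquet",
    "example-project.analytics.orders",
    job_config=job_config,
)
load_job.result()  # Wait for the load job to complete

table = client.get_table("example-project.analytics.orders")
print(f"Loaded {table.num_rows} rows")
```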
Posted 2 days ago
12.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.
Job Description
Title: Data Architect
Type of Employment: Permanent
Overall Years of Experience: 12-15 years
Relevant Years of Experience: 10+
Data Architect
The Data Architect is responsible for designing and implementing data architecture for multiple projects and for building strategies for data governance.
Position Summary
12-15 years of experience in a similar profile with a strong service delivery background. Experience as a Data Architect with a focus on Spark and Data Lake technologies. Experience in Azure Synapse Analytics. Proficiency in Apache Spark for large-scale data processing. Expertise in Databricks, Delta Lake, Azure Data Factory, and other cloud-based data services. Strong understanding of data modeling, ETL processes, and data warehousing principles. Experience implementing a data governance framework with Unity Catalog. Knowledge of designing scalable streaming data pipelines using Azure Event Hubs, Azure Stream Analytics, and Spark Streaming. Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro. Hands-on experience in Python and relevant libraries such as PySpark and NumPy. Knowledge of machine learning pipelines, GenAI, and LLMs will be a plus. Excellent analytical, problem-solving, and technical leadership skills. Experience in integration with business intelligence tools such as Power BI. Effective communication and collaboration abilities. Excellent interpersonal skills and a collaborative management style. Able to own and delegate responsibilities effectively. Ability to analyse and suggest solutions. Strong command of verbal and written English.
Essential Roles and Responsibilities
Work as a Data Architect, able to design and implement data architecture for projects with complex data such as Big Data and data lakes. Work with customers to define the strategy for data architecture and data governance. Guide the team in implementing data engineering solutions. Proactively identify risks, communicate them to stakeholders, and develop strategies to mitigate them. Build best practices to enable faster service delivery. Build reusable components to reduce cost. Build scalable and cost-effective architecture.
EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
Posted 2 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
What You’ll Do
The EIIC functional excellence organization is aligned with the CTO’s strategy to drive “One Eaton Engineering Functional Excellence”. The charter of this organization is to simplify and create better work experiences for our engineers by transforming existing engineering work processes. The EIIC functional excellence organization will work with global Engineering Functional Excellence leaders in the CTO’s office and in the Electrical and Industrial Sector businesses. These organizations will be responsible for developing and deploying One Eaton processes across all sectors and businesses across the globe.
As Senior Data Analyst and Automation Engineer, you will be responsible for understanding critical problem statements and finding unique end-to-end solutions using Big Data analytics and automation expertise. You will also be responsible for establishing and deploying standard practices and processes for process automation, Big Data analytics, dashboards, and reporting, and for driving continuous improvement on these processes.
Primary Responsibility
Work with various internal and external customers; gather and prioritize customer needs and translate them into actionable requirements. Communicate insights to stakeholders, enabling data-driven decision-making across the organization. Develop apps in Workshop, perform ETL processes in Palantir, and develop meaningful insights from the data. Select the appropriate programming languages, tools, and frameworks, considering factors like scalability, performance, and security. Establish coding standards and best practices to ensure the code is maintainable and efficient. Organize and assemble information from diverse data sources in such a manner that the data aggregation is easily replicable and maintainable. Proficiently identify and apply the appropriate data analytics algorithm and provide recommendations based on the insights generated. Report results in the form of dashboards reporting measurement against targets, historical data trends, and data snapshots supporting the end customers' data requirements. Strategize new uses for data and its interaction with data design. Manage multiple projects and deliver results on time and with the requisite quality. Strive to be internally and externally recognized in this area by continuously learning and developing project management standard work and dashboard reporting. Knowledge of Engineering and Program Management data sets, including SAP or Oracle datasets, is recommended. Knowledge of SCM would be an added advantage.
Qualifications
Required: Bachelor’s degree in Computer/Electrical Engineering with 2-5 years of experience. Strong understanding of organizational processes.
Skills
Professional experience in database management, data solution development, data transformation, and data quality assurance. Proficiency in using Palantir tools, including Code Repository, Ontology Manager, Object View, Workshop (dashboards, action forms), and Data Connection. Knowledge of Power BI, ETL processes, RLS, and Dataflow would be an added advantage. Strong hands-on experience with Python and PySpark, demonstrating the ability to write, debug, and optimize code for data analysis and transformation. Competence in analyzing data and efficiently troubleshooting issues using PySpark and SQL. Familiarity with data ingestion, including data loading expertise with Oracle databases, SharePoint, and API calls.
Comfortable working in Agile development methodologies, adapting to changing project requirements and priorities. Effective verbal and written communication skills to collaborate with team members and stakeholders. Capability to adhere to development best practices, including maintaining code standards, unit testing, integration testing, and quality assurance processes.
Primary Skills
Palantir tools, Python/PySpark, Database Management
Secondary Skills
Excellent verbal and written communication and interpersonal skills. Ability to work independently and within a team environment.
Process Management – Good at figuring out the processes necessary to get things done; knows how to organize people and activities; knows what to measure and how to measure it; can simplify complex processes; gets more out of fewer resources.
Problem Solving – Uses rigorous logic and methods to solve difficult problems with effective solutions; probes all fruitful sources for answers; can see hidden problems; is excellent at honest analysis; looks beyond the obvious and doesn't stop at the first answers.
Decision Quality – Makes good decisions based upon a mixture of analysis, wisdom, experience, and judgment.
Drive for Results – Can be counted on to exceed goals successfully.
Critical Thinking – The ability to analyze a situation and make a decision based on the information you have. As an automation engineer, you may be required to make decisions about how best to implement automation processes, and strong critical thinking skills help you make the best decision for your company.
Communication – An essential skill for automation engineers, as they often work with other engineers and professionals in other departments. Effective communication helps you collaborate with others, share ideas, and explain technical concepts.
Interpersonal Savvy – Relates well to all kinds of people; builds appropriate rapport.
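For illustration, a minimal sketch in the spirit of a Palantir Foundry Code Repository transform, assuming the Foundry Python transforms API; the dataset paths and columns are hypothetical placeholders.

```python
# Illustrative Foundry transform (dataset paths and columns are hypothetical;
# assumes the transforms Python API available in Foundry Code Repositories).
from transforms.api import transform_df, Input, Output
from pyspark.sql import functions as F

@transform_df(
    Output("/Example/clean/engineering_metrics"),
    source=Input("/Example/raw/engineering_metrics"),
)
def clean_engineering_metrics(source):
    # Deduplicate, normalise dates, and keep only the columns the dashboard needs
    return (source
            .dropDuplicates(["record_id"])
            .withColumn("report_date", F.to_date("report_ts"))
            .select("record_id", "program", "report_date", "metric", "value"))
```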
Posted 2 days ago
3.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
Job Description
Join a dynamic and diverse global team dedicated to developing innovative solutions that uncover the complete consumer journey for our clients. We are seeking a highly skilled Data Scientist with strong development skills in programming languages such as Python, along with expertise in statistics, mathematics, and econometrics and experience with panel data, to revolutionize the way we measure consumer behavior both online and in-store. Looking ahead, we are excited to find someone who will join our team in developing a tool that can simulate the impact of production process changes on client data. This tool, operating outside of the production factory, will allow the wider Data Science team to drive innovation with unprecedented efficiency.
About The Role
Collaborative Environment: Work with an international team in a flexible and supportive setting, fostering cross-functional collaboration between data scientists, engineers, and product stakeholders.
Tool Ownership and Development: Take ownership of a core Python-based tool, ensuring its continued development, scalability, and maintainability. Use robust engineering practices such as version control, testing, and PRs.
Innovative Solution Development: Collaborate closely with subject matter experts to understand complex methodologies. Translate these into scalable, production-ready implementations within the Python tool. Design and implement new features and enhancements to the tool to address evolving market challenges and improve team efficiency.
Methodology Enhancement: Evaluate and improve current methodologies, including data cleaning, preparation, quality tracking, and consumer projection, with a strong focus on automation and reproducibility.
Documentation & Code Quality: Maintain comprehensive documentation of the tool's architecture, usage, and development roadmap. Ensure high code quality through peer reviews and adherence to best practices.
Research and Analysis: Conduct rigorous research and analysis to inform tool improvements and ensure alignment with business needs. Communicate findings and recommendations clearly to both technical and non-technical audiences.
Deployment and Support: Support the production deployment of new features and enhancements. Monitor tool performance and address issues proactively to ensure reliability and user satisfaction.
Cross-Team Coordination: Coordinate efforts across multiple teams and stakeholders to ensure seamless integration of the tool into broader workflows and systems.
Qualifications
About You
Ideally you possess a good understanding of consumer behavior, panel-based projections, and consumer metrics and analytics. You have successfully designed and developed software applying statistical and data analytical methods and demonstrated your ability to handle complex data sets. Experience with (un)managed crowdsourced panels and receipt capture methodologies is an advantage.
Educational Background: Bachelor’s or Master’s degree in Computer Science, Software Engineering, Mathematics, Statistics, Socioeconomics, Data Science, or a related field, with a minimum of 3 years of relevant experience.
Programming Proficiency: Proficient with Python or another programming language such as R, C++, or Java, with a willingness to learn Python.
Software Engineering Skills: Strong software engineering skills, including experience designing and developing software; optionally, experience with version control systems such as GitHub or Bitbucket.
Data Analysis Skills: Proficiency in manipulating, analyzing, and interpreting large data sets.
Data Handling: Experience using Spark, specifically the PySpark package, and experience working with large-scale datasets. Optionally, experience in SQL and working with queries.
Continuous Learning: Eagerness to adopt and develop evolving technologies and tools.
Statistical Expertise: Statistical and logical skills, experience in data cleaning, and data aggregation techniques.
Communication and Collaboration: Strong communication, writing, and collaboration skills.
Nice to Have
Consumer Insights: Knowledge of consumer behavior and (un)managed consumer-related crowdsourced panels.
Technology Skills: Familiarity with technology stacks for cloud computing (Azure AI, Databricks, Snowflake).
Production Support: Experience or interest in supporting technology teams in production deployment.
Additional Information
Our Benefits
Flexible working environment, volunteer time off, LinkedIn Learning, Employee Assistance Program (EAP).
About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com.
Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook
Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
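For illustration, a minimal PySpark sketch of the kind of panel cleaning and weighted projection described above; the column names, paths, and weighting scheme are hypothetical assumptions, not NIQ methodology.

```python
# Illustrative panel cleaning and weighted projection (column names, paths,
# and the weighting scheme are hypothetical assumptions, not NIQ methodology).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("panel_projection").getOrCreate()

receipts = spark.read.parquet("/data/panel/receipts/")

projected = (receipts
             .filter(F.col("is_valid"))  # drop receipts flagged during data cleaning
             .groupBy("category", "week")
             .agg(F.sum(F.col("spend") * F.col("panelist_weight")).alias("projected_spend"),
                  F.countDistinct("panelist_id").alias("panel_buyers")))

projected.write.mode("overwrite").parquet("/data/panel/projected_spend/")
```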
Posted 2 days ago