12.0 - 17.0 years
14 - 19 Lacs
Bengaluru
Work from Office
Project description
Join our data engineering team to lead the design and implementation of advanced graph database solutions using Neo4j. This initiative supports the organization's mission to transform complex data relationships into actionable intelligence. You will play a critical role in architecting scalable graph-based systems, driving innovation in data connectivity, and empowering cross-functional teams with powerful tools for insight and decision-making.
Responsibilities
* Graph data modeling and implementation: design and implement complex graph data models using Cypher and Neo4j best practices.
* Leverage APOC procedures, custom plugins, and advanced graph algorithms to solve domain-specific problems.
* Oversee integration of Neo4j with other enterprise systems, microservices, and data platforms.
* Develop and maintain APIs and services in Java, Python, or JavaScript to interact with the graph database.
* Mentor junior developers and review code to maintain high-quality standards.
* Establish guidelines for performance tuning, scalability, security, and disaster recovery in Neo4j environments.
* Work with data scientists, analysts, and business stakeholders to translate complex requirements into graph-based solutions.
Skills
Must have
* 12+ years in software/data engineering, with at least 3-5 years of hands-on experience with Neo4j.
* Lead the technical strategy, architecture, and delivery of Neo4j-based solutions.
* Design, model, and implement complex graph data structures using Cypher and Neo4j best practices.
* Guide the integration of Neo4j with other data platforms and microservices.
* Collaborate with cross-functional teams to understand business needs and translate them into graph-based models.
* Mentor junior developers and ensure code quality through reviews and best practices.
* Define and enforce performance tuning, security standards, and disaster recovery strategies for Neo4j.
* Stay up to date with emerging technologies in the graph database and data engineering space.
* Strong proficiency in the Cypher query language, graph modeling, and data visualization tools (e.g., Bloom, Neo4j Browser).
* Solid background in Java, Python, or JavaScript and experience integrating Neo4j with these languages.
* Experience with APOC procedures, Neo4j plugins, and query optimization.
* Familiarity with cloud platforms (AWS) and containerization tools (Docker, Kubernetes).
* Proven experience leading engineering teams or projects.
* Excellent problem-solving and communication skills.
Nice to have
* N/A
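For context on the kind of Neo4j integration work this role describes, here is a minimal, illustrative Python sketch (not part of the original posting) that runs a Cypher query through the official Neo4j driver. The connection details, node labels, and relationship types are hypothetical assumptions for the example.

```python
from neo4j import GraphDatabase  # official Neo4j Python driver (pip install neo4j)

# Hypothetical connection details and data model, for illustration only.
URI = "neo4j://localhost:7687"
AUTH = ("neo4j", "password")

# Cypher over an assumed (Person)-[:HAS_SKILL]->(Skill) graph: find colleagues
# who share the most skills with a given person.
QUERY = """
MATCH (p:Person {name: $name})-[:HAS_SKILL]->(s:Skill)<-[:HAS_SKILL]-(other:Person)
RETURN other.name AS colleague, collect(s.name) AS shared_skills
ORDER BY size(shared_skills) DESC
LIMIT 10
"""

def main() -> None:
    driver = GraphDatabase.driver(URI, auth=AUTH)
    try:
        with driver.session() as session:
            for record in session.run(QUERY, name="Alice"):
                print(record["colleague"], record["shared_skills"])
    finally:
        driver.close()

if __name__ == "__main__":
    main()
```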
Posted 4 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Mumbai
Work from Office
Role Overview: Lead the architectural design and implementation of a secure, scalable Cloudera-based Data Lakehouse for one of India’s top public sector banks.
Key Responsibilities:
* Design end-to-end Lakehouse architecture on Cloudera
* Define data ingestion, processing, storage, and consumption layers
* Guide data modeling, governance, lineage, and security best practices
* Define the migration roadmap from the existing DWH to CDP
* Lead reviews with client stakeholders and engineering teams
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
* Proven experience with Cloudera CDP, Spark, Hive, HDFS, and Iceberg
* Deep understanding of Lakehouse patterns and data mesh principles
* Familiarity with data governance tools (e.g., Apache Atlas, Collibra)
* Banking/FSI domain knowledge highly desirable
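As a rough illustration of the layered ingestion such a Cloudera Lakehouse might use, here is a minimal PySpark sketch (not from the posting) that lands raw data, conforms it, and writes an Iceberg table. The paths, table names, and Iceberg catalog settings are assumptions; the actual configuration depends on the CDP cluster, and the Iceberg Spark runtime is assumed to be available.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Illustrative only: catalog/warehouse settings vary by cluster; assumes the
# Iceberg Spark runtime is on the classpath and a Hive metastore is reachable.
spark = (
    SparkSession.builder.appName("lakehouse-ingest")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hive")
    .getOrCreate()
)

# Raw layer: land source extracts as-is (hypothetical HDFS path).
raw = spark.read.option("header", True).csv("hdfs:///data/raw/transactions/")

# Curated layer: apply basic conformance rules before exposing to consumers.
curated = (
    raw.dropDuplicates(["txn_id"])
       .withColumn("txn_ts", F.to_timestamp("txn_ts"))
       .filter(F.col("amount").isNotNull())
)

# Write to an Iceberg table (assumes the `curated` namespace exists in the catalog)
# so downstream consumers get ACID reads on the lakehouse.
curated.writeTo("lake.curated.transactions").createOrReplace()
```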
Posted 4 weeks ago
3.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
The opportunity
We are seeking a detail-oriented and proactive AWS DataOps Engineer to join our growing data team. In this role, you will be responsible for supporting and optimizing cloud-based data pipelines and ETL workflows across AWS services. You will collaborate with analytics, engineering, and operations teams to ensure the secure, reliable, and scalable movement and transformation of data.
Your Key Responsibilities
* Monitor and maintain data pipelines using AWS Glue, EMR, Lambda, and Amazon S3.
* Support and enhance ETL workflows leveraging IICS (Informatica Intelligent Cloud Services), Databricks, and other AWS-native tools.
* Collaborate with engineering teams to manage ingestion pipelines into Amazon Redshift and perform data quality validations.
* Assist in job scheduling and orchestration via Apache Airflow, AWS Data Pipeline, or similar tools.
* Write and debug SQL queries across Redshift and other AWS databases for data analysis and transformation.
* Troubleshoot and perform root cause analysis of pipeline failures and performance issues in distributed systems.
* Participate in deployment activities using version control and CI/CD pipelines.
* Create and maintain SOPs, runbooks, and documentation for operational workflows.
* Work closely with vendors and internal teams to maintain high system availability and ensure compliance.
Skills And Attributes For Success
* Strong knowledge of AWS data services and architecture.
* Ability to analyze complex workflows and proactively resolve issues related to performance or data quality.
* Solid troubleshooting and problem-solving skills with strong attention to detail.
* Effective communication skills for collaborating across teams and documenting findings or standard practices.
* A self-motivated learner with a passion for process improvement and cloud technologies.
* Comfortable handling multiple tasks and shifting priorities in a dynamic environment.
To qualify for the role, you must have
* 2–3 years of experience in DataOps or Data Engineering roles.
* Hands-on experience with Databricks for data engineering and transformation.
* Understanding of ETL processes and best practices in data movement.
* Working knowledge of Amazon S3, EMR (Elastic MapReduce), AWS Glue, and Lambda.
* Experience with Amazon Redshift, including querying and managing large analytical datasets.
* Familiarity with job orchestration tools like Apache Airflow or AWS Data Pipeline.
* Experience in IICS (Informatica Intelligent Cloud Services) or equivalent ETL tools.
* SQL skills for data transformation, validation, and performance tuning.
Technologies and Tools
Must haves
* S3, EMR (Elastic MapReduce), and Glue for data processing and orchestration
* Databricks – ability to understand and run existing notebooks for data transformation
* Amazon Redshift for data warehousing and SQL-based analysis
* Apache Airflow or AWS Data Pipeline
* AWS Lambda
* Basic operational experience with IICS (Informatica Intelligent Cloud Services) or similar ETL platforms
Good to have
* Exposure to Power BI or Tableau for data visualization and dashboard creation.
* Knowledge of CDI, Informatica, or other enterprise data integration platforms.
* Understanding of DevOps tools and practices, especially in data pipeline CI/CD contexts.
What We Look For
* Enthusiastic learners with a passion for data operations and best practices.
* Problem solvers with a proactive approach to troubleshooting and optimization.
* Team players who can collaborate effectively in a remote or hybrid work environment.
* Detail-oriented professionals with strong documentation skills.
What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
* Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
* Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
* Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
* Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
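To make the orchestration responsibilities above more concrete, here is a minimal, hypothetical sketch (not from the posting) of an Airflow DAG that starts an AWS Glue job via boto3. The job name, schedule, region, and DAG id are assumptions for illustration only.

```python
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator

GLUE_JOB_NAME = "curate-sales-data"  # hypothetical Glue job name

def start_glue_job(**_context):
    # Kick off the Glue job and return the run id so it shows up in task logs/XCom.
    client = boto3.client("glue", region_name="ap-south-1")
    response = client.start_job_run(JobName=GLUE_JOB_NAME)
    return response["JobRunId"]

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    trigger_glue = PythonOperator(
        task_id="start_glue_job",
        python_callable=start_glue_job,
    )
```

A production DAG would typically add a sensor or polling task to wait for the Glue run to finish and alert on failure; this sketch only shows the trigger step.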
Posted 4 weeks ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
AWS DataOps Engineer (EY GDS) – the role description, responsibilities, and requirements are identical to the Coimbatore listing above; only the location differs.
Posted 4 weeks ago
3.0 years
0 Lacs
Kolkata, West Bengal, India
Remote
AWS DataOps Engineer (EY GDS) – the role description, responsibilities, and requirements are identical to the Coimbatore listing above; only the location differs.
Posted 4 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
AWS DataOps Engineer (EY GDS) – the role description, responsibilities, and requirements are identical to the Coimbatore listing above; only the location differs.
Posted 4 weeks ago
3.0 years
0 Lacs
Kanayannur, Kerala, India
Remote
AWS DataOps Engineer (EY GDS) – the role description, responsibilities, and requirements are identical to the Coimbatore listing above; only the location differs.
Posted 4 weeks ago
3.0 years
0 Lacs
Trivandrum, Kerala, India
Remote
AWS DataOps Engineer (EY GDS) – the role description, responsibilities, and requirements are identical to the Coimbatore listing above; only the location differs.
Posted 4 weeks ago
3.0 years
0 Lacs
Pune, Maharashtra, India
Remote
AWS DataOps Engineer (EY GDS) – the role description, responsibilities, and requirements are identical to the Coimbatore listing above; only the location differs.
Posted 4 weeks ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
AWS DataOps Engineer (EY GDS) – the role description, responsibilities, and requirements are identical to the Coimbatore listing above; only the location differs.
Posted 4 weeks ago
5.0 years
0 Lacs
Hyderābād
On-site
(ReactJS UI, Postgres data modeling, Spark analytics work.)
* Team lead with experience leading agile development teams, guiding them and being a role model to the developers in the team (lead by example)
* Must be a highly hands-on lead: writing code, leading the team, reviewing other developers' code, and working closely with the architect to implement the proposed design
* 5+ years of experience writing enterprise-grade Java or Python code (should be highly proficient)
* Solid understanding of data structures and fundamental algorithms (sort, select, search, queue)
* Solid understanding of distributed computing and/or massively parallel processing concepts and at least one of the frameworks: Spark, Kafka, MapReduce, Impala
* 2+ years of experience writing Spark and Spark SQL routines to process large volumes of data
* 2+ years of experience building enterprise data platforms: data ingestion, data lake, ETL, data warehouse, data access patterns/APIs, reporting
* 2+ years of experience building ETL or ELT routines in one or more of the technologies: Spark, Kafka
* Decent data warehousing and data modeling skills
* Experience working in Linux
* Spring Boot/API implementation experience is nice to have
* Azure experience is nice to have
* Databricks experience is nice to have
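As a rough illustration of the "Spark and Spark SQL routines" this role calls for, here is a minimal PySpark sketch (not from the posting) that registers a temp view and produces a daily aggregate. The bucket, dataset, and column names are made up for the example.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("event-aggregation").getOrCreate()

# Hypothetical Parquet dataset of click events already landed in the data lake.
events = spark.read.parquet("s3a://example-bucket/events/")
events.createOrReplaceTempView("events")

# Spark SQL routine: daily event counts and active users per product.
daily_stats = spark.sql("""
    SELECT product_id,
           to_date(event_ts)           AS event_date,
           COUNT(*)                    AS events,
           COUNT(DISTINCT user_id)     AS active_users
    FROM events
    GROUP BY product_id, to_date(event_ts)
""")

# Partition output by date so downstream reporting queries can prune efficiently.
daily_stats.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-bucket/marts/daily_product_stats/"
)
```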
Posted 4 weeks ago
8.0 - 13.0 years
9 - 13 Lacs
Bengaluru
Work from Office
As a Technical Specialist, you will develop and enhance Optical Network Management applications, leveraging experience in optical networks. You will work with fault supervision and performance monitoring. Collaborating in an agile environment, you will drive innovation, optimize efficiency, and explore UI technologies like React. Your role will focus on designing, coding, testing, and improving network management applications to enhance functionality and customer satisfaction.
You have:
* Bachelor's degree and 8 years of experience (or equivalent) in optical networks.
* Hands-on working experience with Core Java, Spring, Kafka, ZooKeeper, Hibernate, and Python.
* Working knowledge of RDBMS, PL/SQL, Linux, Docker, and database concepts.
* Exposure to UI technologies like React.
It would be nice if you also had:
* Domain knowledge in OTN and Photonic network management.
* Strong communication skills and the ability to manage complex relationships.
Responsibilities:
* Develop software for Network Management of Optics Division products, including Photonic/WDM, Optical Transport, SDH, and SONET.
* Enable user control over network configuration through Optics Network Management applications.
* Utilize Core Java, Spring, Kafka, Python, and RDBMS to build high-performing solutions for network configuration.
* Interface Optics Network Management applications with various Network Elements, providing a user-friendly graphical interface and implementing algorithms to simplify network management and reduce OPEX.
* Deploy Optics Network Management applications globally, supporting hundreds of installations for customers.
* Contribute to new developments and maintain applications as part of the development team, focusing on enhancing functionality and customer satisfaction.
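The role itself is Java-centric, but purely to illustrate the kind of fault-supervision event consumption a network management application performs over Kafka, here is a minimal Python sketch using the kafka-python client. The topic name, brokers, consumer group, and message format are assumptions, not details from the posting.

```python
import json

from kafka import KafkaConsumer  # kafka-python client (pip install kafka-python)

# Hypothetical topic carrying alarm/fault events from network elements.
consumer = KafkaConsumer(
    "optical.alarms",
    bootstrap_servers=["localhost:9092"],
    group_id="nms-fault-supervision",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    alarm = message.value
    # A real NMS would update alarm state and notify the UI; here we just print.
    print(alarm.get("severity", "UNKNOWN"), alarm.get("ne_id"), alarm.get("description"))
```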
Posted 1 month ago
5.0 - 8.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Educational: Bachelor of Engineering
Service Line: Data & Analytics Unit
Responsibilities
A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role would be to get to the heart of customer issues, diagnose problem areas, design innovative solutions and facilitate deployment resulting in client delight. You will develop a proposal by owning parts of the proposal document and by giving inputs in solution design based on areas of expertise. You will plan the activities of configuration, configure the product as per the design, conduct conference room pilots and assist in resolving any queries related to requirements and solution design. You will conduct solution/product demonstrations and POC/Proof of Technology workshops, and prepare effort estimates that suit the customer's budgetary requirements and are in line with the organization’s financial guidelines. You will actively lead small projects and contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Additional Responsibilities:
* Ability to develop value-creating strategies and models that enable clients to innovate, drive growth and increase their business profitability
* Good knowledge of software configuration management systems
* Awareness of latest technologies and industry trends
* Logical thinking and problem-solving skills along with an ability to collaborate
* Understanding of the financial processes for various types of projects and the various pricing models available
* Ability to assess current processes, identify improvement areas and suggest technology solutions
* Knowledge of one or two industry domains
* Client interfacing skills
* Project and team management
Technical and Professional: Primary skills: Technology-Big Data - Data Processing-Spark
Preferred Skills: Technology-Big Data - Data Processing-Spark
Posted 1 month ago
3.0 - 5.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Educational: Bachelor of Engineering
Service Line: Data & Analytics Unit
Responsibilities
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Technical and Professional: Primary skills: Technology-Big Data - Data Processing-Map Reduce
Preferred Skills: Technology-Big Data - Data Processing-Map Reduce
Posted 1 month ago
2.0 - 7.0 years
5 - 9 Lacs
Pune
Work from Office
Educational: Bachelor of Engineering
Service Line: Data & Analytics Unit
Responsibilities
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Additional Responsibilities:
* Knowledge of more than one technology
* Basics of architecture and design fundamentals
* Knowledge of testing tools
* Knowledge of agile methodologies
* Understanding of project life cycle activities on development and maintenance projects
* Understanding of one or more estimation methodologies; knowledge of quality processes
* Basics of the business domain to understand business requirements
* Analytical abilities, strong technical skills, good communication skills
* Good understanding of the technology and domain
* Ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles and modelling methods
* Awareness of latest technologies and trends
* Excellent problem-solving, analytical and debugging skills
Technical and Professional: Primary skills: Hadoop, Hive, HDFS
Preferred Skills: Technology-Big Data - Hadoop-Hadoop
Posted 1 month ago
5.0 - 9.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Education: Bachelor of Engineering, BCA, BSc, MCA, MTech, MSc
Service Line: Data & Analytics Unit
Responsibilities
1. 5-8 years of experience in Azure (hands-on experience in Azure Databricks and Azure Data Factory)
2. Good knowledge of SQL and PySpark
3. Knowledge of the Medallion architecture pattern
4. Knowledge of Integration Runtime
5. Knowledge of the different ways of scheduling jobs via ADF (event-based, schedule-based, etc.)
6. Knowledge of AAS and cubes
7. Ability to create, manage and optimize cube processing
8. Good communication skills
9. Experience in leading a team
Additional Responsibilities:
Good knowledge of software configuration management systems
Strong business acumen, strategy and cross-industry thought leadership
Awareness of the latest technologies and industry trends
Logical thinking and problem-solving skills, along with an ability to collaborate
Knowledge of two or three industry domains
Understanding of the financial processes for various types of projects and the various pricing models available
Client interfacing skills
Knowledge of SDLC and agile methodologies
Project and team management
Preferred Skills: Technology-Big Data - Data Processing-Spark
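As context for the Medallion architecture pattern mentioned above, the following is a minimal, hypothetical PySpark sketch of the bronze/silver/gold layering commonly used on Databricks. All storage paths, table and column names are assumptions for illustration, and the Delta format assumes a Databricks or Delta Lake-enabled environment.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw source files as-is (hypothetical storage paths throughout).
bronze = spark.read.json("/mnt/raw/orders/")
bronze.write.mode("append").format("delta").save("/mnt/bronze/orders")

# Silver: clean and conform the bronze data (deduplicate, fix types, filter bad rows).
silver = (
    spark.read.format("delta").load("/mnt/bronze/orders")
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount") > 0)
)
silver.write.mode("overwrite").format("delta").save("/mnt/silver/orders")

# Gold: business-level aggregate ready for reporting or cube refresh.
gold = silver.groupBy("order_date").agg(F.sum("amount").alias("daily_sales"))
gold.write.mode("overwrite").format("delta").save("/mnt/gold/daily_sales")
```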
Posted 1 month ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking a Data Solution Architect. In this role, you will leverage your skills in artificial intelligence and machine learning to design robust data analytics solutions. If you are ready to make an impact, apply today!
Responsibilities
Design data analytics solutions utilizing the big data technology stack
Create and present solution architecture documents with technical details
Collaborate with business stakeholders to identify solution requirements and key scenarios
Conduct solution architecture reviews and audits while calculating and presenting ROI
Lead implementation of solutions from establishing project requirements to go-live
Engage in pre-sale activities including customer communications and RFP processing
Develop proposals and design solutions while presenting architecture to customers
Create and follow a personal education plan in technology stack and solution architecture
Maintain knowledge of industry trends and best practices
Engage new clients to drive business growth in the big data space
Requirements
Strong hands-on experience as a Big Data developer with a solid design background
Experience delivering data analytics projects and architecture guidelines
Experience in big data solutions on premises and in the cloud
Production project experience in at least one big data technology
Knowledge of batch processing frameworks like Hadoop, MapReduce, Spark, or Hive
Familiarity with NoSQL databases such as Cassandra, HBase, or Kudu
Understanding of Agile development methodology with emphasis on Scrum
Experience in direct customer communications and pre-sales consulting
Experience working within a consulting environment would be highly valuable
Posted 1 month ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking a skilled Data Solution Architect with expertise in Azure and Databricks to join our team. In this role, you will design, develop, and implement scalable big data solutions while collaborating closely with business stakeholders and technical teams to deliver cutting-edge architectures that drive business value.
Responsibilities
Design data analytics solutions by utilising the big data technology stack
Create and present solution architecture documents with deep technical details
Work closely with the business to identify solution requirements and key case studies/scenarios for future solutions
Conduct solution architecture reviews/audits, and calculate and present ROI
Lead implementation of solutions, from establishing project requirements and goals to solution "go-live"
Participate in the full cycle of pre-sale activities: direct communications with customers, RFP processing, development of proposals for implementation and design of the solution, presentation of the proposed solution architecture to the customer, and participation in technical meetings with customer representatives
Create and follow a personal education plan in the technology stack and solution architecture
Maintain a strong understanding of industry trends and best practices
Get involved in engaging new clients to further drive EPAM business in the big data space
Requirements
Strong hands-on experience as a Big Data developer with a solid design/development background in Java, Scala, or Python
Background in delivering data analytics projects and establishing architecture guidelines
Expertise in big data platforms both on-premises and in the cloud (Amazon Web Services, Microsoft Azure, Google Cloud)
Production project experience in at least one of the big data technologies
Familiarity with big data technologies such as Spark, MapReduce, Hive and batch processing systems
Understanding of NoSQL databases including Cassandra, HBase, Accumulo, or Kudu
Knowledge of Agile development methodology, Scrum in particular
Experience in direct customer communications and pre-selling business-consulting engagements to clients within large enterprise environments
Nice to have
Experience working within a consulting business and pre-sales experience
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a Data Solution Architect. In this role, you will leverage your skills in artificial intelligence and machine learning to design robust data analytics solutions. If you are ready to make an impact, apply today!
Responsibilities
Design data analytics solutions utilizing the big data technology stack
Create and present solution architecture documents with technical details
Collaborate with business stakeholders to identify solution requirements and key scenarios
Conduct solution architecture reviews and audits while calculating and presenting ROI
Lead implementation of solutions from establishing project requirements to go-live
Engage in pre-sale activities including customer communications and RFP processing
Develop proposals and design solutions while presenting architecture to customers
Create and follow a personal education plan in technology stack and solution architecture
Maintain knowledge of industry trends and best practices
Engage new clients to drive business growth in the big data space
Requirements
Strong hands-on experience as a Big Data developer with a solid design background
Experience delivering data analytics projects and architecture guidelines
Experience in big data solutions on premises and in the cloud
Production project experience in at least one big data technology
Knowledge of batch processing frameworks like Hadoop, MapReduce, Spark, or Hive
Familiarity with NoSQL databases such as Cassandra, HBase, or Kudu
Understanding of Agile development methodology with emphasis on Scrum
Experience in direct customer communications and pre-sales consulting
Experience working within a consulting environment would be highly valuable
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a Data Solution Architect. In this role, you will leverage your skills in artificial intelligence and machine learning to design robust data analytics solutions. If you are ready to make an impact, apply today!
Responsibilities
Design data analytics solutions utilizing the big data technology stack
Create and present solution architecture documents with technical details
Collaborate with business stakeholders to identify solution requirements and key scenarios
Conduct solution architecture reviews and audits while calculating and presenting ROI
Lead implementation of solutions from establishing project requirements to go-live
Engage in pre-sale activities including customer communications and RFP processing
Develop proposals and design solutions while presenting architecture to customers
Create and follow a personal education plan in technology stack and solution architecture
Maintain knowledge of industry trends and best practices
Engage new clients to drive business growth in the big data space
Requirements
Strong hands-on experience as a Big Data developer with a solid design background
Experience delivering data analytics projects and architecture guidelines
Experience in big data solutions on premises and in the cloud
Production project experience in at least one big data technology
Knowledge of batch processing frameworks like Hadoop, MapReduce, Spark, or Hive
Familiarity with NoSQL databases such as Cassandra, HBase, or Kudu
Understanding of Agile development methodology with emphasis on Scrum
Experience in direct customer communications and pre-sales consulting
Experience working within a consulting environment would be highly valuable
Posted 1 month ago
5.0 - 10.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Description: Use data analysis to triage and investigate data quality issues, data pipeline exceptions and reporting issues.
Requirements:
This role will primarily support Data Operations and Reporting projects, but will also help with other projects as needed. In this role, you will leverage your strong analytical skills to triage and investigate data quality and data pipeline exceptions and reporting issues. The ideal candidate should be able to work independently and actively engage other functional teams as needed. This role requires researching transactions and events using large amounts of data.
Technical Experience/Qualifications:
• At least 5 years of experience in software development
• At least 5 years of SQL experience in any RDBMS
• Minimum 5 years of experience in Python
• Strong analytical and problem-solving skills
• Strong communication skills
• Strong experience with data modeling
• Strong experience in data analysis and reporting
• Experience with version control tools such as GitHub
• Experience with shell scripting and Linux
• Knowledge of agile and scrum methodologies
• Preferred: experience in Hive SQL or related technologies such as BigQuery
• Preferred: experience in big data technologies like Hadoop, AWS/GCP, S3, Hive, Impala, HDFS, Spark, MapReduce
• Preferred: experience in reporting tools such as Looker or Tableau
• Preferred: experience in finance and accounting (not required)
Job Responsibilities:
• Develop SQL queries as per technical requirements
• Investigate and fix day-to-day data-related issues
• Develop test plans and execute test scripts
• Perform data validation and analysis
• Develop new reports/dashboards as per technical requirements
• Modify existing reports/dashboards for bug fixes and enhancements
• Develop new ETL scripts and modify existing ones for bug fixes and enhancements
• Monitor ETL processes and fix issues in case of failure
• Monitor scheduled jobs and fix issues in case of failure
• Monitor data quality alerts and act on them
What We Offer:
Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities!
Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft skills training.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidised rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks and a GL Club where you can drink coffee or tea with your colleagues over a game, plus discounts at popular stores and restaurants!
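To illustrate the kind of data-quality triage described in the responsibilities above, here is a minimal, hypothetical Python sketch that runs a reconciliation query and flags an exception when row counts drift. The table names, tolerance and in-memory SQLite database are assumptions used purely for illustration; a real pipeline would point the same check at the production warehouse and raise an alert instead of printing.

```python
import sqlite3

# Hypothetical reconciliation check: compare row counts between a source
# staging table and the reporting table it feeds, and flag any mismatch.
RECON_QUERY = """
SELECT
    (SELECT COUNT(*) FROM stg_transactions) AS source_rows,
    (SELECT COUNT(*) FROM rpt_transactions) AS report_rows
"""

def check_row_counts(conn: sqlite3.Connection, tolerance: int = 0) -> bool:
    """Return True when source and report row counts agree within tolerance."""
    source_rows, report_rows = conn.execute(RECON_QUERY).fetchone()
    drift = abs(source_rows - report_rows)
    if drift > tolerance:
        # In a real pipeline this would raise an alert (email, Slack, pager).
        print(f"Data quality exception: source={source_rows}, report={report_rows}, drift={drift}")
        return False
    return True

if __name__ == "__main__":
    # Purely illustrative in-memory database standing in for the warehouse.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE stg_transactions (id INTEGER);
        CREATE TABLE rpt_transactions (id INTEGER);
        INSERT INTO stg_transactions VALUES (1), (2), (3);
        INSERT INTO rpt_transactions VALUES (1), (2);
    """)
    check_row_counts(conn)
```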
Posted 1 month ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a Data Solution Architect. In this role, you will leverage your skills in artificial intelligence and machine learning to design robust data analytics solutions. If you are ready to make an impact, apply today!
Responsibilities
Design data analytics solutions utilizing the big data technology stack
Create and present solution architecture documents with technical details
Collaborate with business stakeholders to identify solution requirements and key scenarios
Conduct solution architecture reviews and audits while calculating and presenting ROI
Lead implementation of solutions from establishing project requirements to go-live
Engage in pre-sale activities including customer communications and RFP processing
Develop proposals and design solutions while presenting architecture to customers
Create and follow a personal education plan in technology stack and solution architecture
Maintain knowledge of industry trends and best practices
Engage new clients to drive business growth in the big data space
Requirements
Strong hands-on experience as a Big Data developer with a solid design background
Experience delivering data analytics projects and architecture guidelines
Experience in big data solutions on premises and in the cloud
Production project experience in at least one big data technology
Knowledge of batch processing frameworks like Hadoop, MapReduce, Spark, or Hive
Familiarity with NoSQL databases such as Cassandra, HBase, or Kudu
Understanding of Agile development methodology with emphasis on Scrum
Experience in direct customer communications and pre-sales consulting
Experience working within a consulting environment would be highly valuable
Posted 1 month ago
5.0 - 7.0 years
5 - 5 Lacs
Kochi, Hyderabad, Thiruvananthapuram
Work from Office
Key Responsibilities
Develop & Deliver: Build applications/features/components as per design specifications, ensuring high-quality code that adheres to coding standards and project timelines.
Testing & Debugging: Write, review, and execute unit test cases; debug code; validate results with users; and support defect analysis and mitigation.
Technical Decision Making: Select optimal technical solutions, including reuse or creation of components, to enhance efficiency, cost-effectiveness, and quality.
Documentation & Configuration: Create and review design documents, templates, checklists, and configuration management plans; ensure team compliance.
Domain Expertise: Understand the customer's business domain deeply to advise developers and identify opportunities for value addition; obtain relevant certifications.
Project & Release Management: Manage delivery of modules/user stories, estimate efforts, coordinate releases, and ensure adherence to engineering processes and timelines.
Team Leadership: Set goals (FAST), provide feedback, mentor team members, maintain motivation, and manage people-related issues effectively.
Customer Interaction: Clarify requirements, present design options, conduct demos, and build customer confidence through timely, quality deliverables.
Technology Stack: Expertise in Big Data technologies (PySpark, Scala), plus preferred skills in AWS services (EMR, S3, Glue, Airflow, RDS, DynamoDB), CI/CD tools (Jenkins), relational and NoSQL databases, microservices, and containerization (Docker, Kubernetes).
Soft Skills & Collaboration: Communicate clearly, work under pressure, handle dependencies and risks, collaborate with cross-functional teams, and proactively seek and offer help.
Required Skills: Big Data, PySpark, Scala
Additional Comments:
Must-Have Skills: Big Data (PySpark + Java/Scala)
Preferred Skills:
AWS (EMR, S3, Glue, Airflow, RDS, DynamoDB, or similar)
CI/CD (Jenkins or another tool)
Relational database experience (any)
NoSQL database experience (any)
Microservices, domain services, API gateways or similar
Containers (Docker, K8s, or similar)
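Since the stack above names Airflow alongside AWS Glue and EMR, the following is a minimal, hypothetical Airflow DAG sketch that schedules a daily batch step of the kind such a role would orchestrate. The DAG id, schedule and Python callable are illustrative assumptions only, and the placeholder callable stands in for submitting a real Glue or EMR Spark job.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_daily_batch(**context):
    # Placeholder for the real work, e.g. submitting a Glue or EMR Spark job.
    print(f"Running daily batch for {context['ds']}")


with DAG(
    dag_id="daily_sales_batch",        # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",        # time-based trigger; event-based triggers are also possible
    catchup=False,
    tags=["example"],
) as dag:
    batch = PythonOperator(
        task_id="run_daily_batch",
        python_callable=run_daily_batch,
    )
```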
Posted 1 month ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
LinkedIn is the world's largest professional network, built to create economic opportunity for every member of the global workforce. Our products help people make powerful connections, discover exciting opportunities, build necessary skills, and gain valuable insights every day. We're also committed to providing transformational opportunities for our own employees by investing in their growth. We aspire to create a culture that's built on trust, care, inclusion, and fun, where everyone can succeed. Join us to transform the way the world works.
This role will be based in Bangalore, India. At LinkedIn, our approach to flexible work is centered on trust and optimized for culture, connection, clarity, and the evolving needs of our business. The work location of this role is hybrid, meaning it will be performed both from home and from a LinkedIn office on select days, as determined by the business needs of the team.
As part of our world-class software engineering team, you will be charged with building the next-generation infrastructure and platforms for LinkedIn, including but not limited to: an application and service delivery platform, massively scalable data storage and replication systems, a cutting-edge search platform, a best-in-class AI platform, an experimentation platform, a privacy and compliance platform, etc. You will work and learn among the best, putting to use your passion for distributed technologies and algorithms, API design and systems design, and your passion for writing code that performs at an extreme scale. LinkedIn has already pioneered well-known open-source infrastructure projects like Apache Kafka, Pinot, Azkaban, Samza, Venice, Datahub, Feather, etc. We also work with industry-standard open-source infrastructure products like Kubernetes, gRPC and GraphQL - come join our infrastructure teams and share the knowledge with a broader community while making a real impact within our company.
Responsibilities:
- You will own the technical strategy for broad or complex requirements with insightful and forward-looking approaches that go beyond the direct team and solve large open-ended problems.
- You will design, implement, and optimize the performance of large-scale distributed systems with security and compliance in mind.
- You will improve the observability and understandability of various systems with a focus on improving developer productivity and system sustenance.
- You will effectively communicate with the team, partners and stakeholders.
- You will mentor other engineers, define our challenging technical culture, and help to build a fast-growing team.
- You will work closely with the open-source community to participate in and influence cutting-edge open-source projects (e.g., Apache Iceberg).
- You will deliver incremental impact by driving innovation while iteratively building and shipping software at scale.
- You will diagnose technical problems, debug in production environments, and automate routine tasks.
Basic Qualifications:
- BA/BS Degree in Computer Science or related technical discipline, or related practical experience.
- 8+ years of industry experience in software design, development, and algorithm-related solutions.
- 8+ years of experience programming in object-oriented languages such as Java, Python, Go and/or functional languages such as Scala or other relevant coding languages.
- Hands-on experience developing distributed systems, large-scale systems, databases and/or backend APIs.
Preferred Qualifications:
- Experience with the Hadoop (or similar) ecosystem (Gobblin, Kafka, Iceberg, ORC, MapReduce, Yarn, HDFS, Hive, Spark, Presto)
- Experience with industry, open-source projects and/or academic research in data management, relational databases, and/or large-data, parallel and distributed systems
- Experience in architecting, building, and running large-scale systems
- Experience with open-source project management and governance
Suggested Skills:
- Distributed systems
- Backend systems infrastructure
- Java
You will benefit from our culture: We strongly believe in the well-being of our employees and their families. That is why we offer generous health and wellness programs and time away for employees of all levels.
India Disability Policy: LinkedIn is an equal employment opportunity employer offering opportunities to all job seekers, including individuals with disabilities. For more information on our equal opportunity policy, please visit https://legal.linkedin.com/content/dam/legal/Policy_India_EqualOppPWD_9-12-2023.pdf
Global Data Privacy Notice for Job Candidates: This document provides transparency around the way in which LinkedIn handles personal data of employees and job applicants: https://legal.linkedin.com/candidate-portal
Posted 1 month ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
LinkedIn is the world's largest professional network, built to create economic opportunity for every member of the global workforce. Our products help people make powerful connections, discover exciting opportunities, build necessary skills, and gain valuable insights every day. We're also committed to providing transformational opportunities for our own employees by investing in their growth. We aspire to create a culture that's built on trust, care, inclusion, and fun, where everyone can succeed. Join us to transform the way the world works.
This role will be based in Bangalore, India. At LinkedIn, our approach to flexible work is centered on trust and optimized for culture, connection, clarity, and the evolving needs of our business. The work location of this role is hybrid, meaning it will be performed both from home and from a LinkedIn office on select days, as determined by the business needs of the team.
As part of our world-class software engineering team, you will be charged with building the next-generation infrastructure and platforms for LinkedIn, including but not limited to: an application and service delivery platform, massively scalable data storage and replication systems, a cutting-edge search platform, a best-in-class AI platform, an experimentation platform, a privacy and compliance platform, etc. You will work and learn among the best, putting to use your passion for distributed technologies and algorithms, API design and systems design, and your passion for writing code that performs at an extreme scale. LinkedIn has already pioneered well-known open-source infrastructure projects like Apache Kafka, Pinot, Azkaban, Samza, Venice, Datahub, Feather, etc. We also work with industry-standard open-source infrastructure products like Kubernetes, gRPC and GraphQL - come join our infrastructure teams and share the knowledge with a broader community while making a real impact within our company.
Responsibilities:
- You will design, build and operate one of the online data infrastructure platforms that power all of LinkedIn's core applications.
- You will participate in design and code reviews to maintain our high development standards.
- You will partner with peers, leads and internal customers to define scope, prioritize and build impactful features at a high velocity.
- You will mentor other engineers and will help build a fast-growing team.
- You will work closely with the open-source community to participate in and influence cutting-edge open-source projects.
Basic Qualifications:
- BA/BS Degree in Computer Science or related technical discipline, or related practical experience.
- 5+ years of industry experience in software design, development, and algorithm-related solutions.
- 5+ years of experience programming in object-oriented languages such as Java, Python, Go, and/or functional languages such as Scala or other relevant coding languages.
- Hands-on experience developing distributed systems, large-scale systems, databases and/or backend APIs.
Preferred Qualifications:
- Experience with the Hadoop (or similar) ecosystem (MapReduce, Yarn, HDFS, Hive, Spark, Presto)
- Experience with industry, open-source projects and/or academic research in data management, relational databases, and/or large-data, parallel and distributed systems
- Experience with open-source project management and governance
- Experience with cloud computing (e.g., Azure) is a plus.
Suggested Skills:
- Distributed systems
- Backend systems infrastructure
- Java
You will benefit from our culture: We strongly believe in the well-being of our employees and their families. That is why we offer generous health and wellness programs and time away for employees of all levels.
India Disability Policy: LinkedIn is an equal employment opportunity employer offering opportunities to all job seekers, including individuals with disabilities. For more information on our equal opportunity policy, please visit https://legal.linkedin.com/content/dam/legal/Policy_India_EqualOppPWD_9-12-2023.pdf
Global Data Privacy Notice for Job Candidates: This document provides transparency around the way in which LinkedIn handles personal data of employees and job applicants: https://legal.linkedin.com/candidate-portal
Posted 1 month ago