0 years
0 Lacs
Andhra Pradesh, India
On-site
Organization Summary
A career within Operations Consulting services will provide you with the opportunity to help our clients optimize all elements of their operations to move beyond the role of a cost-effective business enabler and become a source of competitive advantage. We focus on product innovation and development, supply chain, procurement and sourcing, manufacturing operations, service operations and capital asset programs to drive both growth and profitability. In our Operations Transformation team, you’ll work with our clients to transform their enterprise processes by leveraging integrated supply and demand planning solutions to enhance their core transaction processing and reporting competencies, ultimately strengthening their ability to support management decision-making and corporate strategy.
To really stand out and make us fit for the future in a constantly changing world, each and every one of us at PwC needs to be a purpose-led and values-driven leader at every level. To help us achieve this, we have the PwC Professional, our global leadership development framework. It gives us a single set of expectations across our lines, geographies, and career paths, and provides transparency on the skills we need as individuals to be successful and progress in our careers, now and in the future.
We are seeking an experienced Senior Consultant with a strong technical background and extensive experience in implementing o9, Blue Yonder (BY), Kinaxis or SAP IBP solutions for planning. The ideal candidate will have completed multiple implementations and possess deep knowledge of these products and their technical architectures. This role focuses on the ability to translate business requirements into technical and architecture needs, and to lead or support the implementation journey.

Key Responsibilities
Drive a workstream while executing technical implementation of o9, BY, Kinaxis or SAP IBP solutions
Collaborate with stakeholders to understand business requirements and translate them into technical specifications
Develop materials to assist in design and process discussions with clients
Conduct design discussions with clients to align on key design decisions
Support the design and architecture of the overall technology framework for implementations
Develop testing strategies and test scenarios
Identify gaps and develop custom design specifications
Troubleshoot and resolve technical issues that arise during implementation
Ensure best practices and quality standards are followed across the engagement delivery
Conduct training sessions and knowledge transfer to internal teams and client teams
Travel may be required for this role, depending on client requirements

Education
Degrees/Field of Study required: MBA/MTech or a Master's degree in a related field

Certifications
Certifications related to o9, Blue Yonder, SAP IBP, Kinaxis or other relevant technologies

Required Skills
Functional and technical expertise in o9, Blue Yonder, Kinaxis or SAP IBP, including reference model configurations and workflows
Supply chain planning domain experience in demand planning, supply and inventory planning, production planning, S&OP, and IBP

Optional Skills
Advanced understanding of data models for o9, Blue Yonder, Kinaxis or SAP IBP
Experience with other supply chain planning solutions
Database: SQL, Python on Hadoop, R scripts
MS SSIS integration skills, Hadoop Hive

Travel Requirements
Yes
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The HiLabs Story
HiLabs is a leading provider of AI-powered solutions to clean dirty data, unlocking its hidden potential for healthcare transformation. HiLabs is committed to transforming the healthcare industry through innovation, collaboration, and a relentless focus on improving patient outcomes.

HiLabs Team
Multidisciplinary industry leaders
Healthcare domain experts
AI/ML and data science experts
Professionals hailing from the world's best universities, business schools, and engineering institutes, including Harvard, Yale, Carnegie Mellon, Duke, Georgia Tech, Indian Institute of Management (IIM), and Indian Institute of Technology (IIT).
Be a part of a team that harnesses advanced AI, ML, and big data technologies to develop a cutting-edge healthcare technology platform, delivering innovative business solutions.

Job Title: Data Engineer I/II
Job Location: Pune, Maharashtra, India

Job Summary:
We are a leading Software as a Service (SaaS) company that specializes in the transformation of data in the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions. We are looking for Software Developers who continually strive to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment.

Responsibilities
Design, develop, and maintain robust and scalable ETL/ELT pipelines to ingest and transform large datasets from various sources (a minimal orchestration sketch follows this listing's requirements).
Optimize and manage databases (SQL/NoSQL) to ensure efficient data storage, retrieval, and manipulation for both structured and unstructured data.
Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems.
Implement and maintain data validation and monitoring processes to ensure data accuracy, consistency, and availability.
Automate repetitive data engineering tasks and optimize data workflows for performance and scalability.
Work closely with cross-functional teams to understand their data needs and provide solutions that help scale operations.
Ensure proper documentation of data engineering processes, workflows, and infrastructure for easy maintenance and scalability.

Desired Profile
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
3-5 years of hands-on experience as a Data Engineer or in a related data-driven role.
Strong experience with ETL tools like Apache Airflow, Talend, or Informatica.
Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
Strong proficiency in Python, Scala, or Java for data manipulation and pipeline development.
Experience with cloud-based platforms (AWS, Google Cloud, Azure) and their data services (e.g., S3, Redshift, BigQuery).
Familiarity with big data processing frameworks such as Hadoop, Spark, or Flink.
Experience in data warehousing concepts and building data models (e.g., Snowflake, Redshift).
Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA).
Familiarity with version control systems like Git.
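The pipeline and orchestration work described in the responsibilities above can be illustrated with a short sketch. Below is a minimal daily extract-transform-load DAG in the style of Apache Airflow 2.x, one of the ETL tools the posting names; the DAG name, task logic, and data are hypothetical placeholders rather than anything specified by the role.

```python
# Minimal daily ETL DAG sketch (Apache Airflow 2.x style).
# All names, values, and helpers below are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull raw records from a source system (API, OLTP database, files, ...).
    # A small in-memory sample keeps the sketch self-contained.
    return [{"id": 1, "value": 10}, {"id": 2, "value": None}]


def transform(ti, **context):
    # Validate and clean the extracted records (drop rows with missing values).
    rows = ti.xcom_pull(task_ids="extract")
    return [r for r in rows if r["value"] is not None]


def load(ti, **context):
    # Persist the cleaned rows to the target store (stubbed as a print here).
    rows = ti.xcom_pull(task_ids="transform")
    print(f"Loading {len(rows)} clean rows into the warehouse")


with DAG(
    dag_id="daily_etl_sketch",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",              # Airflow >= 2.4; older 2.x uses schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```

Passing small, JSON-serializable values between tasks keeps the sketch simple; in practice, large datasets are usually passed by reference (for example, a file path or table name) rather than through XCom.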
HiLabs is an equal opportunity employer (EOE). No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability, or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. HiLabs is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce to support individual growth and superior business results.

Thank you for reviewing this opportunity with HiLabs! If this position appears to be a good fit for your skillset, we welcome your application.

HiLabs Total Rewards
Competitive salary, accelerated incentive policies, H1B sponsorship, and a comprehensive benefits package that includes ESOPs, financial contribution for your ongoing professional and personal development, medical coverage for you and your loved ones, 401k, PTOs, and a collaborative working environment; smart mentorship and highly qualified, incredibly talented multidisciplinary professionals from highly renowned and accredited medical schools, business schools, and engineering institutes.

CCPA disclosure notice: https://www.hilabs.com/privacy
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary:
We are seeking an experienced Data Engineer with a strong background in Scala development, advanced SQL, and big data technologies, particularly Apache Spark. The candidate will be responsible for designing, building, optimizing, and maintaining highly scalable and reliable data pipelines and data infrastructure.

Key Responsibilities:
Data Pipeline Development: Design, develop, test, and deploy robust, high-performance, and scalable ETL/ELT data pipelines using Scala and Apache Spark to ingest, process, and transform large volumes of structured and unstructured data from diverse sources.
Big Data Expertise: Leverage expertise in the Hadoop ecosystem (HDFS, Hive, etc.) and distributed computing principles to build efficient and fault-tolerant data solutions.
Advanced SQL: Write complex, optimized SQL queries and stored procedures.
Performance Optimization: Continuously monitor, analyze, and optimize the performance of data pipelines and data stores. Troubleshoot complex data-related issues, identify bottlenecks, and implement solutions for improved efficiency and reliability.
Data Quality & Governance: Implement data quality checks, validation rules, and reconciliation processes to ensure the accuracy, completeness, and consistency of data. Contribute to data governance and security best practices.
Automation & CI/CD: Implement automation for data pipeline deployment, monitoring, and alerting using tools like Apache Airflow, Jenkins, or similar CI/CD platforms.
Documentation: Create and maintain comprehensive technical documentation for data architectures, pipelines, and processes.

Required Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related quantitative field.
Minimum 5 years of professional experience in Data Engineering, with a strong focus on big data technologies.
Proficiency in Scala for developing big data applications and transformations, especially with Apache Spark.
Expert-level proficiency in SQL; ability to write complex queries, optimize performance, and understand database internals.
Extensive hands-on experience with Apache Spark (Spark SQL, DataFrames, RDDs) for large-scale data processing and analytics.
Solid understanding of distributed computing concepts and experience with the Hadoop ecosystem (HDFS, Hive).
Experience with building and optimizing ETL/ELT processes and data warehousing concepts.
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Key Responsibilities:

Ab Initio Development & Optimization:
Design, develop, test, and deploy high-performance, scalable ETL/ELT solutions using Ab Initio components (GDE, Co>Operating System, EME, Control Center).
Translate complex business requirements and data transformation rules into efficient and maintainable Ab Initio graphs and plans.
Optimize existing Ab Initio applications for improved performance, resource utilization, and reliability.
Troubleshoot, debug, and resolve complex data quality and processing issues within Ab Initio graphs and systems.

Data Modeling & Advanced SQL:
Apply expertise in advanced SQL to write complex queries for data extraction, transformation, validation, and analysis across various relational databases (e.g., DB2, Oracle, SQL Server).
Design and implement efficient relational data models (e.g., Star Schema, Snowflake Schema, 3NF) for data warehousing and analytics.
Understand and apply big data modeling concepts (e.g., denormalization for performance, schema-on-read, partitioning strategies for distributed systems).

Spark & Big Data Integration:
Collaborate with data architects on data integration strategies in a hybrid environment, understanding how Ab Initio processes interact with or feed into big data platforms.
Analyze and debug data flow issues that may span traditional ETL and big data platforms (e.g., HDFS, Hive, Spark).
Demonstrate strong foundational knowledge in Apache Spark, including Spark SQL and DataFrame operations, to understand and assist in debugging Spark-based pipelines.

Collaboration & Documentation:
Work effectively with business analysts, data architects, QA teams, and other developers to deliver high-quality data solutions.
Create and maintain comprehensive technical documentation for Ab Initio graphs, data lineage, data models, and ETL processes.
Participate in code reviews and design discussions, and contribute to best practices within the team.

Required Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
5+ years of hands-on, in-depth development experience with Ab Initio GDE, Co>Operating System, and EME.
Expert-level proficiency in SQL for complex data manipulation, analysis, and optimization across various relational databases.
Solid understanding of relational data modeling concepts and experience designing logical and physical data models.
Demonstrated proficiency or strong foundational knowledge in Apache Spark (Spark SQL, DataFrames) and familiarity with the broader Hadoop ecosystem (HDFS, Hive).
Experience with Unix/Linux shell scripting.
Strong understanding of ETL processes, data warehousing concepts, and data integration patterns.
Excellent problem-solving, analytical, and troubleshooting skills.
Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
Posted 1 week ago
9.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Our Company
Changing the world through digital experiences is what Adobe's all about. We give everyone, from emerging artists to global brands, everything they need to design and deliver exceptional digital experiences! We're passionate about empowering people to create beautiful and powerful images, videos, and apps, and to transform how companies interact with customers across every screen. We're on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

The Challenge:
Search, Discovery, and Content AI (SDC) is a cornerstone of Adobe's ecosystem, enabling creative professionals and everyday users to access, discover, and demonstrate a wide array of digital assets and creative content, including images, videos, documents, vector graphics and more. With increasing demand for intuitive search, contextual discovery, and seamless content interactions across Adobe products like Express, Lightroom, and Adobe Stock, SDC is evolving into a generative AI powerhouse. This team develops innovative solutions for intent understanding, personalized recommendations, and action orchestration to transform how users interact with content. Working with extensive datasets and pioneering technologies, you will help redefine the discovery experience and drive user success. If working at the intersection of machine learning, generative AI, and real-time systems excites you, we'd love to have you join us.

Responsibilities:
As a Machine Learning Engineer on the SDC team, you will develop and optimize machine learning models and algorithms for search, recommendation, and content understanding across diverse content types. You will build and deploy scalable generative AI solutions to enable intelligent content discovery and contextual recommendations within Adobe products. Collaborating with multi-functional teams, you will integrate ML models into production systems, ensuring high performance, reliability, and user impact. Your role will involve researching, designing, and implementing pioneering techniques in natural language understanding, computer vision, and multimodal learning for content and asset discovery. You will also contribute to the end-to-end ML pipeline, including data preprocessing, model training, evaluation, deployment, and monitoring, while pushing the boundaries of computational efficiency to meet the needs of real-time, large-scale applications. Partnering with product teams, you will identify customer needs and translate them into innovative solutions that prioritize usability and performance. Additionally, you will mentor and provide technical guidance to junior engineers and multi-functional collaborators, driving excellence and innovation within the team.

What You'll Need to Succeed:
Bachelor's degree or equivalent experience; an advanced degree such as a Ph.D. in Computer Science, Machine Learning, Data Science, or a related field.
6-9 years of industry experience building and deploying machine learning systems at scale.
Proficiency in programming languages such as Python (for ML/AI) and Java (for production-grade systems).
Strong expertise in machine learning frameworks and tools (e.g., TensorFlow, PyTorch).
Solid understanding of mathematics and ML fundamentals: linear algebra, statistics, optimization, and numerical methods.
Experience with deep learning techniques for computer vision (e.g., CNNs, transformers), natural language understanding (e.g., BERT, GPT), or multimodal AI.
Consistent track record of delivering ML solutions to production environments, optimizing performance, and ensuring reliability.
Knowledge of large-scale distributed systems and frameworks (e.g., Kubernetes, Spark, Hadoop).
Strong problem-solving skills and the ability to innovate new solutions for complex challenges.
Excellent communication and collaboration skills to work efficiently in a fast-paced, multi-functional environment.

Nice-to-Haves:
Experience with generative AI (e.g., Stable Diffusion, DALL·E, MidJourney) and its application in content creation or discovery.
Knowledge of computational geometry, 3D modeling, or animation pipelines.
Familiarity with real-time recommendation systems or search indexing.
Publications in peer-reviewed journals or conferences in relevant fields.

Why Join Us?
Be part of a dynamic, innovative team shaping the future of AI-powered creative tools. Work on impactful, large-scale projects that touch millions of users daily. Enjoy the benefits and resources of a leading global company with the agility of a startup environment.

Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
Posted 1 week ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Summary
Position Summary

Strategy & Analytics
AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms
Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements

Google Cloud Platform - Data Engineer
Cloud is shifting business models at our clients and transforming the way technology enables business. As our clients embark on this transformational journey to cloud, they are looking for trusted partners who can help them navigate through this journey. Our clients' journeys span from cloud strategy to implementation, migration of legacy applications to supporting operations of a cloud ecosystem, and everything in between. Deloitte's Cloud Delivery Center supports our client project teams in this journey by delivering the new solutions by which IT services are obtained, used, and managed.

You will be working with other technologists to deliver cutting-edge solutions using Google Cloud Services (GCP), programming and automation tools for some of our Fortune 1000 clients. You will have the opportunity to contribute to work that may involve building new cloud solutions, migrating an application to co-exist in the hybrid cloud, deploying a global cloud application across multiple countries, or supporting a set of cloud managed services. Our teams of technologists have a diverse range of skills and we are always looking for new ways to innovate and help our clients succeed. You will have an opportunity to leverage the skills you already have, try new technologies, and develop skills that will improve your brand and career as a well-rounded, cutting-edge technologist.

Work you'll do
As a GCP Data Engineer you will have multiple responsibilities depending on project type. As a Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze and explore/visualize data on the Google Cloud Platform. You will work on data migrations and transformational projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform issues. In this role you are the Data Engineer working with Deloitte's most strategic Cloud customers. Together with the team you will support customer implementation of Google Cloud products through architecture guidance, best practices, data migration, capacity planning, implementation, troubleshooting, monitoring and much more.
The key responsibilities may involve some or all of the areas listed below:
Act as a trusted technical advisor to customers and solve complex Big Data challenges.
Create and deliver best-practice recommendations, tutorials, blog articles, sample code, and technical presentations, adapting to different levels of key business and technical stakeholders.
Identify new tools and processes to improve the cloud platform and automate processes.

Qualifications

Technical Requirements
BA/BS degree in Computer Science, Mathematics or a related technical field, or equivalent practical experience.
Experience in Cloud SQL and Cloud Bigtable.
Experience in Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub and Genomics (a minimal BigQuery sketch follows this list).
Experience in Google Transfer Appliance, Cloud Storage Transfer Service, BigQuery Data Transfer.
Experience with data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and with data processing algorithms (MapReduce, Flume).
Experience working with technical customers.
Experience in writing software in one or more languages such as Java, C++, Python, Go and/or JavaScript.

Consulting Requirements
6-9 years of relevant consulting, industry or technology experience.
Strong problem solving and troubleshooting skills.
Strong communicator.
Willingness to travel in case of project requirement.

Preferred Qualifications
Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools and environments.
Experience in technical consulting.
Experience architecting, developing software, or building internet-scale production-grade Big Data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS/Azure (good to have).
Experience working with big data, information retrieval, data mining or machine learning, as well as experience in building multi-tier high-availability applications with modern web technologies (such as NoSQL, Kafka, NLP, MongoDB, SparkML, TensorFlow).
Working knowledge of ITIL and/or agile methodologies.
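As a rough illustration of the BigQuery work referenced in the qualifications above, the sketch below runs a small aggregation from Python with the official google-cloud-bigquery client. The project, dataset, and table names are invented placeholders, and credentials are assumed to be already configured in the environment; this is an example of the kind of task involved, not part of the posting itself.

```python
# Minimal BigQuery query sketch using the official google-cloud-bigquery client.
# Project, dataset, and table names are illustrative placeholders.
from google.cloud import bigquery


def daily_event_counts(project_id: str = "example-project") -> None:
    # The client picks up credentials from the environment
    # (e.g., GOOGLE_APPLICATION_CREDENTIALS or gcloud auth).
    client = bigquery.Client(project=project_id)

    query = """
        SELECT event_date, COUNT(*) AS events
        FROM `example-project.analytics.events`
        GROUP BY event_date
        ORDER BY event_date DESC
        LIMIT 7
    """

    # Run the query and iterate over the result rows.
    for row in client.query(query).result():
        print(f"{row.event_date}: {row.events} events")


if __name__ == "__main__":
    daily_event_counts()
```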
Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development
From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 300079
Posted 1 week ago
6.0 - 9.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Summary
Position Summary

Strategy & Analytics
AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms
Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements

PySpark Sr. Consultant
The position is suited for individuals who have demonstrated the ability to work effectively in a fast-paced, high-volume, deadline-driven environment.

Education and Experience
Education: B.Tech/M.Tech/MCA/MS
6-9 years of experience in the design and implementation of migrating an enterprise legacy system to a Big Data ecosystem for data warehousing projects.

Required Skills
Excellent knowledge of Apache Spark and Python programming experience
Deep technical understanding of distributed computing and broader awareness of different Spark versions
Strong UNIX operating system concepts and shell scripting knowledge
Hands-on experience using Spark and Python
Deep experience in developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations (a minimal sketch follows at the end of this listing)
Experience in deploying and operationalizing the code; knowledge of scheduling tools like Airflow, Control-M, etc. is preferred
Working experience with the AWS ecosystem, Google Cloud, BigQuery, etc. is an added advantage
Hands-on experience with AWS S3 filesystem operations
Good knowledge of Hadoop, Hive and the Cloudera/Hortonworks Data Platform
Exposure to Jenkins or an equivalent CI/CD tool and a Git repository
Experience handling CDC operations for huge volumes of data
Understanding of and operating experience with the Agile delivery model
Experience in Spark-related performance tuning
Well versed with design documents such as HLD, TDD, etc.
Well versed with data historical loads and overall framework concepts
Participation in different kinds of testing, such as Unit Testing, System Testing, and User Acceptance Testing

Preferred Skills
Exposure to PySpark, Cloudera/Hortonworks, Hadoop and Hive
Exposure to AWS S3/EC2 and Apache Airflow
Participation in client interactions/meetings is desirable
Participation in code tuning is desirable
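The PySpark tasks named above, reading from external sources, merging data, enriching it, and loading it to a target destination, can be sketched roughly as follows. The paths, column names, and output location are hypothetical placeholders, and the snippet assumes a working Spark installation with access to the referenced storage.

```python
# Minimal PySpark read -> merge -> enrich -> load sketch.
# Paths, schemas, and column names below are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl_sketch").getOrCreate()

# Read from two external sources (a CSV extract and a Parquet reference table).
orders = spark.read.option("header", True).csv("/data/raw/orders/")
customers = spark.read.parquet("/data/reference/customers/")

# Merge the datasets and enrich with a derived column.
enriched = (
    orders.join(customers, on="customer_id", how="left")
          .withColumn("order_value", F.col("quantity") * F.col("unit_price"))
          .filter(F.col("order_value").isNotNull())
)

# Load into the target destination, partitioned for downstream reads.
(
    enriched.write
            .mode("overwrite")
            .partitionBy("order_date")
            .parquet("/data/curated/orders_enriched/")
)

spark.stop()
```

Writing the curated output partitioned by a date column is a common choice because downstream jobs can then prune partitions instead of scanning the full dataset.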
Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development
From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 300041
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Summary
Position Summary

Strategy & Analytics
AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms
Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements

Google Cloud Platform - Data Engineer
Cloud is shifting business models at our clients and transforming the way technology enables business. As our clients embark on this transformational journey to cloud, they are looking for trusted partners who can help them navigate through this journey. Our clients' journeys span from cloud strategy to implementation, migration of legacy applications to supporting operations of a cloud ecosystem, and everything in between. Deloitte's Cloud Delivery Center supports our client project teams in this journey by delivering the new solutions by which IT services are obtained, used, and managed.

You will be working with other technologists to deliver cutting-edge solutions using Google Cloud Services (GCP), programming and automation tools for some of our Fortune 1000 clients. You will have the opportunity to contribute to work that may involve building new cloud solutions, migrating an application to co-exist in the hybrid cloud, deploying a global cloud application across multiple countries, or supporting a set of cloud managed services. Our teams of technologists have a diverse range of skills and we are always looking for new ways to innovate and help our clients succeed. You will have an opportunity to leverage the skills you already have, try new technologies, and develop skills that will improve your brand and career as a well-rounded, cutting-edge technologist.

Work you'll do
As a GCP Data Engineer you will have multiple responsibilities depending on project type. As a Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze and explore/visualize data on the Google Cloud Platform. You will work on data migrations and transformational projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform issues. In this role you are the Data Engineer working with Deloitte's most strategic Cloud customers. Together with the team you will support customer implementation of Google Cloud products through architecture guidance, best practices, data migration, capacity planning, implementation, troubleshooting, monitoring and much more.
The key responsibilities may involve some or all of the areas listed below:
Act as a trusted technical advisor to customers and solve complex Big Data challenges.
Create and deliver best-practice recommendations, tutorials, blog articles, sample code, and technical presentations, adapting to different levels of key business and technical stakeholders.
Identify new tools and processes to improve the cloud platform and automate processes.

Qualifications

Technical Requirements
BA/BS degree in Computer Science, Mathematics or a related technical field, or equivalent practical experience.
Experience in Cloud SQL and Cloud Bigtable.
Experience in Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub and Genomics.
Experience in Google Transfer Appliance, Cloud Storage Transfer Service, BigQuery Data Transfer.
Experience with data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and with data processing algorithms (MapReduce, Flume).
Experience working with technical customers.
Experience in writing software in one or more languages such as Java, C++, Python, Go and/or JavaScript.

Consulting Requirements
3-6 years of relevant consulting, industry or technology experience.
Strong problem solving and troubleshooting skills.
Strong communicator.
Willingness to travel in case of project requirement.

Preferred Qualifications
Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools and environments.
Experience in technical consulting.
Experience architecting, developing software, or building internet-scale production-grade Big Data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS/Azure (good to have).
Experience working with big data, information retrieval, data mining or machine learning, as well as experience in building multi-tier high-availability applications with modern web technologies (such as NoSQL, Kafka, NLP, MongoDB, SparkML, TensorFlow).
Working knowledge of ITIL and/or agile methodologies.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development
From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship.
From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 300075
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Summary
Position Summary

Strategy & Analytics
AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms
Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements

PySpark Consultant
The position is suited for individuals who have demonstrated the ability to work effectively in a fast-paced, high-volume, deadline-driven environment.

Education and Experience
Education: B.Tech/M.Tech/MCA/MS
3-6 years of experience in the design and implementation of migrating an enterprise legacy system to a Big Data ecosystem for data warehousing projects.

Required Skills
Excellent knowledge of Apache Spark and Python programming experience
Deep technical understanding of distributed computing and broader awareness of different Spark versions
Strong UNIX operating system concepts and shell scripting knowledge
Hands-on experience using Spark and Python
Deep experience in developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations
Experience in deploying and operationalizing the code; knowledge of scheduling tools like Airflow, Control-M, etc. is preferred
Working experience with the AWS ecosystem, Google Cloud, BigQuery, etc. is an added advantage
Hands-on experience with AWS S3 filesystem operations
Good knowledge of Hadoop, Hive and the Cloudera/Hortonworks Data Platform
Exposure to Jenkins or an equivalent CI/CD tool and a Git repository
Experience handling CDC operations for huge volumes of data
Understanding of and operating experience with the Agile delivery model
Experience in Spark-related performance tuning
Well versed with design documents such as HLD, TDD, etc.
Well versed with data historical loads and overall framework concepts
Participation in different kinds of testing, such as Unit Testing, System Testing, and User Acceptance Testing

Preferred Skills
Exposure to PySpark, Cloudera/Hortonworks, Hadoop and Hive
Exposure to AWS S3/EC2 and Apache Airflow
Participation in client interactions/meetings is desirable
Participation in code tuning is desirable

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development
From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 300028
Posted 1 week ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Description: Data Engineer – Early Careers / Trainee
Location: India – Gurgaon (immediate joiners only)
Department: Public Cloud – Offerings and Delivery – Cloud Data Services / Hybrid

Job Summary:
We are looking for a motivated Fresher/Trainee Data Engineer to join our Cloud Data Services team. As a trainee, you will learn and contribute to the design, development, and maintenance of data pipelines that enable analytics and business decision-making in cloud and hybrid environments. This role is ideal for recent graduates or entry-level candidates passionate about data and cloud technologies.

Key Responsibilities:
Assist in developing and maintaining scalable and efficient data pipelines under the guidance of senior engineers
Support data extraction, transformation, and loading (ETL/ELT) processes
Learn and apply data quality, governance, and validation practices
Participate in developing data models for structured and semi-structured data
Collaborate with cross-functional teams including data scientists, analysts, and business stakeholders
Follow best practices in version control (Git) and data pipeline/DevOps/MLOps principles
Document data workflows, pipelines, and learnings for future reference
Stay updated with new data engineering tools and technologies

Education & Qualifications:
Bachelor's degree (or final year) in Computer Science, Information Technology, Data Engineering, or related fields
Coursework or academic projects in databases, data warehousing, data structures, Python/Java/Scala, and SQL
Familiarity with cloud platforms (AWS, Azure, or Google Cloud) is a plus
Knowledge of ETL processes, data modeling concepts, or big data technologies (Hadoop, Spark) is desirable

Technical Skills (Good to Have / Will Learn On-the-Job):
Basic knowledge of Python or SQL programming
Exposure to data integration tools or scripting
Understanding of relational and NoSQL databases
Familiarity with data visualization tools like Power BI or Tableau (optional)
Interest in cloud technologies (AWS S3, Azure Data Lake, GCP BigQuery)

Soft Skills:
Strong analytical and problem-solving mindset
Eagerness to learn new technologies and take on challenges
Good written and verbal communication
Ability to work both independently and within a team environment
Attention to detail and time management

What You Will Gain:
Hands-on experience with cloud-based data platforms
Exposure to real-world data engineering projects
Training in ETL pipelines, data modeling, and cloud data services
Opportunity to transition to a full-time Data Engineer role based on performance
Posted 1 week ago
8.0 - 13.0 years
25 - 40 Lacs
Bengaluru
Hybrid
Job Title / Primary Skill: Big Data Developer (Lead/Associate Manager)
Management Level: G150
Years of Experience: 8 to 13 years
Job Location: Bangalore (Hybrid)
Must-Have Skills: Big Data, Spark, Scala, SQL, Hadoop ecosystem
Educational Qualification: BE/BTech/MTech/MCA, or a Bachelor's or Master's degree in Computer Science

Job Overview
Overall experience of 8+ years in IT, software engineering or a relevant discipline. Designs, develops, implements, and updates software systems in accordance with the needs of the organization. Evaluates, schedules, and resources development projects; investigates user needs; and documents, tests, and maintains computer programs.

Job Description:
We look for developers with good knowledge of Scala programming and SQL.

Technical Skills:
Scala, Python -> Scala is often used for Hadoop-based projects, while Python and Scala are choices for Apache Spark-based projects.
SQL -> Knowledge of SQL (Structured Query Language) is important for querying and manipulating data.
Shell scripting -> Shell scripts are used for batch processing of data, can be used for scheduling jobs, and are often used for deploying applications.
Spark Scala -> Spark Scala allows you to write Spark applications using the Spark API in Scala.
Spark SQL -> Allows you to work with structured data using SQL-like queries and DataFrame APIs. You can execute SQL queries against DataFrames, enabling easy data exploration, transformation, and analysis (see the sketch after this listing).

The typical tasks and responsibilities of a Big Data Developer include:
1. Data Ingestion: Collecting and importing data from various sources, such as databases, logs, and APIs, into the Big Data infrastructure.
2. Data Processing: Designing data pipelines to clean, transform, and prepare raw data for analysis. This often involves using technologies like Apache Hadoop and Apache Spark.
3. Data Storage: Selecting appropriate data storage technologies like the Hadoop Distributed File System (HDFS), Hive, Impala, or cloud-based storage solutions (Snowflake, Databricks).
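To make the Spark SQL point above concrete, here is a minimal sketch that registers a DataFrame as a temporary view and runs the same aggregation through both the SQL interface and the DataFrame API. It is written in PySpark for consistency with the other sketches in this listing set, although this role emphasizes Scala; the Scala API is analogous, and all table and column names are invented for illustration.

```python
# Minimal Spark SQL sketch: query a DataFrame with SQL via a temporary view.
# Written in PySpark for illustration; the Scala DataFrame/Spark SQL API is analogous.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark_sql_sketch").getOrCreate()

# Build a small in-memory DataFrame (placeholder sales data).
sales = spark.createDataFrame(
    [("north", 120.0), ("south", 75.5), ("north", 60.0)],
    ["region", "amount"],
)

# Expose the DataFrame to Spark SQL as a temporary view.
sales.createOrReplaceTempView("sales")

# Same aggregation expressed as a SQL query and as a DataFrame transformation.
by_region_sql = spark.sql(
    "SELECT region, SUM(amount) AS total FROM sales GROUP BY region"
)
by_region_df = sales.groupBy("region").sum("amount")

by_region_sql.show()
by_region_df.show()

spark.stop()
```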
Posted 1 week ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Summary
Position Summary

Strategy & Analytics
AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms
Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements

Google Cloud Platform - Data Engineer
Cloud is shifting business models at our clients and transforming the way technology enables business. As our clients embark on this transformational journey to cloud, they are looking for trusted partners who can help them navigate through this journey. Our clients' journeys span from cloud strategy to implementation, migration of legacy applications to supporting operations of a cloud ecosystem, and everything in between. Deloitte's Cloud Delivery Center supports our client project teams in this journey by delivering the new solutions by which IT services are obtained, used, and managed.

You will be working with other technologists to deliver cutting-edge solutions using Google Cloud Services (GCP), programming and automation tools for some of our Fortune 1000 clients. You will have the opportunity to contribute to work that may involve building new cloud solutions, migrating an application to co-exist in the hybrid cloud, deploying a global cloud application across multiple countries, or supporting a set of cloud managed services. Our teams of technologists have a diverse range of skills and we are always looking for new ways to innovate and help our clients succeed. You will have an opportunity to leverage the skills you already have, try new technologies, and develop skills that will improve your brand and career as a well-rounded, cutting-edge technologist.

Work you'll do
As a GCP Data Engineer you will have multiple responsibilities depending on project type. As a Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze and explore/visualize data on the Google Cloud Platform. You will work on data migrations and transformational projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform issues. In this role you are the Data Engineer working with Deloitte's most strategic Cloud customers. Together with the team you will support customer implementation of Google Cloud products through architecture guidance, best practices, data migration, capacity planning, implementation, troubleshooting, monitoring and much more.
The key responsibilities may involve some or all of the areas listed below:
Act as a trusted technical advisor to customers and solve complex Big Data challenges.
Create and deliver best-practice recommendations, tutorials, blog articles, sample code, and technical presentations, adapting to different levels of key business and technical stakeholders.
Identify new tools and processes to improve the cloud platform and automate processes.

Qualifications

Technical Requirements
BA/BS degree in Computer Science, Mathematics or a related technical field, or equivalent practical experience.
Experience in Cloud SQL and Cloud Bigtable.
Experience in Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub and Genomics.
Experience in Google Transfer Appliance, Cloud Storage Transfer Service, BigQuery Data Transfer.
Experience with data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and with data processing algorithms (MapReduce, Flume).
Experience working with technical customers.
Experience in writing software in one or more languages such as Java, C++, Python, Go and/or JavaScript.

Consulting Requirements
3-6 years of relevant consulting, industry or technology experience.
Strong problem solving and troubleshooting skills.
Strong communicator.
Willingness to travel in case of project requirement.

Preferred Qualifications
Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools and environments.
Experience in technical consulting.
Experience architecting, developing software, or building internet-scale production-grade Big Data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS/Azure (good to have).
Experience working with big data, information retrieval, data mining or machine learning, as well as experience in building multi-tier high-availability applications with modern web technologies (such as NoSQL, Kafka, NLP, MongoDB, SparkML, TensorFlow).
Working knowledge of ITIL and/or agile methodologies.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development
From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship.
From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 300075
Posted 1 week ago
6.0 years
0 Lacs
Greater Kolkata Area
On-site
Summary
Position Summary

Strategy & Analytics
AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms
Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements

Google Cloud Platform - Data Engineer
Cloud is shifting business models at our clients and transforming the way technology enables business. As our clients embark on this transformational journey to cloud, they are looking for trusted partners who can help them navigate through this journey. Our clients' journeys span from cloud strategy to implementation, migration of legacy applications to supporting operations of a cloud ecosystem, and everything in between. Deloitte's Cloud Delivery Center supports our client project teams in this journey by delivering the new solutions by which IT services are obtained, used, and managed.

You will be working with other technologists to deliver cutting-edge solutions using Google Cloud Services (GCP), programming and automation tools for some of our Fortune 1000 clients. You will have the opportunity to contribute to work that may involve building new cloud solutions, migrating an application to co-exist in the hybrid cloud, deploying a global cloud application across multiple countries, or supporting a set of cloud managed services. Our teams of technologists have a diverse range of skills and we are always looking for new ways to innovate and help our clients succeed. You will have an opportunity to leverage the skills you already have, try new technologies, and develop skills that will improve your brand and career as a well-rounded, cutting-edge technologist.

Work you'll do
As a GCP Data Engineer you will have multiple responsibilities depending on project type. As a Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze and explore/visualize data on the Google Cloud Platform. You will work on data migrations and transformational projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform issues. In this role you are the Data Engineer working with Deloitte's most strategic Cloud customers. Together with the team you will support customer implementation of Google Cloud products through architecture guidance, best practices, data migration, capacity planning, implementation, troubleshooting, monitoring and much more.
The key responsibilities may involve some or all of the areas listed below:
Act as a trusted technical advisor to customers and solve complex Big Data challenges.
Create and deliver best-practice recommendations, tutorials, blog articles, sample code, and technical presentations, adapting to different levels of key business and technical stakeholders.
Identify new tools and processes to improve the cloud platform and automate processes.

Qualifications
Technical Requirements
BA/BS degree in Computer Science, Mathematics or a related technical field, or equivalent practical experience.
Experience in Cloud SQL and Cloud Bigtable.
Experience in Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub and Genomics.
Experience in Google Transfer Appliance, Cloud Storage Transfer Service, BigQuery Data Transfer.
Experience with data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and with data processing frameworks (MapReduce, Flume).
Experience working with technical customers.
Experience in writing software in one or more languages such as Java, C++, Python, Go and/or JavaScript.

Consulting Requirements
6-9 years of relevant consulting, industry or technology experience.
Strong problem-solving and troubleshooting skills.
Strong communicator.
Willingness to travel as required by the project.

Preferred Qualifications
Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools and environments.
Experience in technical consulting.
Experience architecting and developing software or internet-scale, production-grade Big Data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS/Azure (good to have).
Experience working with big data, information retrieval, data mining or machine learning, as well as experience in building multi-tier, high-availability applications with modern web technologies (such as NoSQL, Kafka, NLP, MongoDB, SparkML, TensorFlow).
Working knowledge of ITIL and/or agile methodologies.

Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship.
From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 300079
Posted 1 week ago
6.0 - 9.0 years
0 Lacs
Greater Kolkata Area
On-site
Position Summary
Strategy & Analytics AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets. AI & Data will work with our clients to: Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms; Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions; Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements.

PySpark Sr. Consultant
The position is suited for individuals who have a demonstrated ability to work effectively in a fast-paced, high-volume, deadline-driven environment.

Education And Experience
Education: B.Tech/M.Tech/MCA/MS
6-9 years of experience in the design and implementation of migrations from enterprise legacy systems to a Big Data ecosystem for data warehousing projects.

Required Skills
Must have excellent knowledge of Apache Spark and Python programming experience.
Deep technical understanding of distributed computing and broader awareness of different Spark versions.
Strong UNIX operating system concepts and shell scripting knowledge.
Hands-on experience using Spark & Python.
Deep experience in developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment and loading into target data destinations (a minimal sketch appears at the end of this posting).
Experience in deploying and operationalizing code; knowledge of scheduling tools like Airflow, Control-M, etc. is preferred.
Working experience on the AWS ecosystem, Google Cloud, BigQuery, etc. is an added advantage.
Hands-on experience with AWS S3 filesystem operations.
Good knowledge of Hadoop, Hive and the Cloudera/Hortonworks Data Platform.
Should have exposure to Jenkins or an equivalent CI/CD tool and a Git repository.
Experience handling CDC operations for huge volumes of data.
Should understand and have operating experience with the Agile delivery model.
Should have experience in Spark-related performance tuning.
Should be well versed with design documents such as HLD, TDD, etc.
Should be well versed with historical data loads and overall framework concepts.
Should have participated in different kinds of testing, like Unit Testing, System Testing, User Acceptance Testing, etc.

Preferred Skills
Exposure to PySpark, Cloudera/Hortonworks, Hadoop and Hive.
Exposure to AWS S3/EC2 and Apache Airflow.
Participation in client interactions/meetings is desirable.
Participation in code-tuning is desirable.

Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Benefits At Deloitte, we know that great people make a great organization.
We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 300041
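As referenced under Required Skills above, here is a minimal, hypothetical PySpark sketch of that kind of task: reading from an external source, enriching against a reference table, and loading into a target destination. Paths, database, table and column names are placeholders, not taken from the posting.

```python
# Hypothetical PySpark enrichment task: read from an external source, join with
# reference data, derive a column, and load into a target destination.
# All paths, tables and columns below are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("order-enrichment")
    .enableHiveSupport()          # assumes a Hive metastore is available
    .getOrCreate()
)

orders = spark.read.option("header", True).csv("s3a://example-bucket/raw/orders/")
customers = spark.table("reference_db.customers")   # reference table in Hive

enriched = (
    orders.join(customers, on="customer_id", how="left")
          .withColumn(
              "order_value",
              F.col("quantity").cast("double") * F.col("unit_price").cast("double"),
          )
          .withColumn("load_date", F.current_date())
)

# Load into the target destination as partitioned Parquet
(enriched.write
         .mode("overwrite")
         .partitionBy("load_date")
         .parquet("s3a://example-bucket/curated/orders/"))

spark.stop()
```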
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Greater Kolkata Area
On-site
Position Summary
Strategy & Analytics AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets. AI & Data will work with our clients to: Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms; Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions; Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements.

PySpark Consultant
The position is suited for individuals who have a demonstrated ability to work effectively in a fast-paced, high-volume, deadline-driven environment.

Education And Experience
Education: B.Tech/M.Tech/MCA/MS
3-6 years of experience in the design and implementation of migrations from enterprise legacy systems to a Big Data ecosystem for data warehousing projects.

Required Skills
Must have excellent knowledge of Apache Spark and Python programming experience.
Deep technical understanding of distributed computing and broader awareness of different Spark versions.
Strong UNIX operating system concepts and shell scripting knowledge.
Hands-on experience using Spark & Python.
Deep experience in developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment and loading into target data destinations.
Experience in deploying and operationalizing code; knowledge of scheduling tools like Airflow, Control-M, etc. is preferred.
Working experience on the AWS ecosystem, Google Cloud, BigQuery, etc. is an added advantage.
Hands-on experience with AWS S3 filesystem operations.
Good knowledge of Hadoop, Hive and the Cloudera/Hortonworks Data Platform.
Should have exposure to Jenkins or an equivalent CI/CD tool and a Git repository.
Experience handling CDC operations for huge volumes of data.
Should understand and have operating experience with the Agile delivery model.
Should have experience in Spark-related performance tuning.
Should be well versed with design documents such as HLD, TDD, etc.
Should be well versed with historical data loads and overall framework concepts.
Should have participated in different kinds of testing, like Unit Testing, System Testing, User Acceptance Testing, etc.

Preferred Skills
Exposure to PySpark, Cloudera/Hortonworks, Hadoop and Hive.
Exposure to AWS S3/EC2 and Apache Airflow.
Participation in client interactions/meetings is desirable.
Participation in code-tuning is desirable.

Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Benefits At Deloitte, we know that great people make a great organization.
We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 300028
Posted 1 week ago
4.0 - 9.0 years
9 - 18 Lacs
Pune, Gurugram
Work from Office
The first Data Engineer specializes in traditional ETL with SAS DI and Big Data (Hadoop, Hive). The second is more versatile, skilled in modern data engineering with Python, MongoDB, and real-time processing.
Posted 1 week ago
2.0 - 4.0 years
13 - 17 Lacs
Gurugram, Delhi / NCR
Hybrid
Role & responsibilities
Understand business use cases and convert them into technical designs.
Work as part of a cross-disciplinary team, collaborating closely with other data engineers, software engineers, data scientists, data managers and business partners.
Design scalable, testable and maintainable data pipelines.
Identify areas for data governance improvements and help resolve data quality problems through the appropriate choice of error detection and correction, process control and improvement, or process design changes.
Develop metrics to measure effectiveness and drive adoption of Data Governance policies and standards that will be applied to mitigate identified risks across the data lifecycle (e.g., capture/production, aggregation/processing, reporting/consumption).
Continuously monitor, troubleshoot, and improve data pipelines and workflows to ensure optimal performance and cost-effectiveness.
Review architecture and design for aspects such as scalability, security, design patterns, user experience and non-functional requirements, and ensure that all relevant best practices are followed.

Key skills required:
2-4 years of experience in data engineering roles.
Advanced SQL skills with a focus on optimisation techniques.
Big data and Hadoop experience, with a focus on Spark, Hive (or other query engines) and big data storage formats (such as Parquet, ORC, Avro).
Cloud experience (GCP preferred) with solutions designed and implemented at production scale.
Strong understanding of key GCP services, especially those related to data processing (batch/real time): BigQuery, Cloud Scheduler, Airflow, Cloud Logging and Monitoring.
Hands-on experience with Git, advanced automation capabilities and shell scripting.
Experience in the design, development and implementation of data pipelines for Data Warehousing applications.
Hands-on experience in performance tuning and debugging ETL jobs.
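To illustrate the orchestration side of the skills listed above, here is a minimal, hypothetical Airflow DAG: a daily pipeline with a transformation step followed by a data-quality check. The DAG id, schedule and callables are assumptions made purely for illustration, not part of the role description.

```python
# Hypothetical sketch of a small Airflow DAG: a daily transformation followed
# by a data-quality gate. Everything here (dag_id, schedule, callables) is a
# placeholder for illustration only.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_transformation(**context):
    # In a real pipeline this would submit a BigQuery or Spark job;
    # here it only marks where that call would live.
    print("Running daily transformation for", context["ds"])


def check_row_counts(**context):
    # Placeholder data-quality check; a real check would query the warehouse.
    print("Validating output row counts for", context["ds"])


with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",   # run at 02:00 every day
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    transform = PythonOperator(task_id="transform", python_callable=run_transformation)
    validate = PythonOperator(task_id="validate", python_callable=check_row_counts)

    transform >> validate
```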
Posted 1 week ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Detailed Job Description for Solution Architect (PAN India)

Architectural Assessment & Roadmapping
Conduct a comprehensive assessment of the current R&D Data Lake architecture. Propose and design the architecture for the next-generation self-service R&D Data Lake based on defined product specifications. Contribute to defining a detailed architectural roadmap that incorporates the latest enterprise patterns and strategic recommendations for the engineering team.

Data Ingestion & Processing Enhancements
Design and prototype updated data ingestion mechanisms that meet GxP validation requirements and improve data flow efficiency. Architect advanced data and metadata processing techniques to enhance data quality and accessibility.

Storage Patterns Optimization
Evaluate optimized storage patterns to ensure scalability, performance, and cost-effectiveness. Design updated storage solutions aligned with technical roadmap objectives and compliance standards.

Data Handling & Governance
Define and document standardized data handling procedures that adhere to GxP and data governance policies. Collaborate with governance teams to ensure procedures align with regulatory standards and best practices. Assess current security measures and implement robust access controls to protect sensitive R&D data. Ensure that all security enhancements adhere to enterprise security frameworks and regulatory requirements. Design and implement comprehensive data cataloguing procedures to improve data discoverability and usability. Integrate cataloguing processes with existing data governance frameworks to maintain continuity and compliance. Recommend and oversee the implementation of new tools and technologies related to ingestion, storage, processing, handling, security, and cataloguing. Design and plan to ensure seamless integration and minimal disruption during technology updates. Collaborate on the ongoing maintenance and provide technical support for legacy data ingestion pipelines throughout the uplift project. Ensure legacy systems remain stable, reliable, and efficient during the transition period. Work closely with the R&D IT team, data governance groups, and other stakeholders for coordinated and effective implementation of architectural updates. Collaborate in knowledge transfer sessions to equip internal teams to manage and maintain the new architecture post-project.

Required Skills
Bachelor’s degree in Computer Science, Information Technology, or a related field with equivalent hands-on experience. Minimum 10 years of experience in solution architecture, with a strong background in data architecture and enterprise data management. Strong understanding of cloud-native platforms, with a preference for AWS. Knowledgeable in distributed data architectures, including services like S3, Glue, and Lake Formation. Proven experience in programming languages and tools relevant to data engineering (e.g., Python, Scala). Experienced with Big Data technologies such as Hadoop, Cassandra, Spark, Hive, and Kafka. Skilled in using querying tools such as Redshift, Spark SQL, Hive, and Presto. Demonstrated experience in data modeling, data pipeline development and data warehousing.

Infrastructure And Deployment
Familiar with Infrastructure-as-Code tools, including Terraform and CloudFormation. Experienced in building systems around the CI/CD concept. Hands-on experience with AWS services and other cloud platforms.
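As one concrete illustration of the cataloguing work described above, the following hypothetical boto3 sketch registers a curated S3 prefix with the AWS Glue Data Catalog via a crawler so downstream services can discover it. The crawler name, IAM role ARN, database and S3 path are placeholders, not details from the posting.

```python
# Hypothetical sketch: register a curated S3 prefix in the Glue Data Catalog
# with a crawler so tools such as Athena, Lake Formation or Redshift Spectrum
# can discover it. All names, ARNs and paths are placeholders.
import boto3

glue = boto3.client("glue", region_name="us-east-1")

glue.create_crawler(
    Name="rnd-data-lake-orders-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",   # placeholder role
    DatabaseName="rnd_data_lake",
    Targets={"S3Targets": [{"Path": "s3://example-rnd-lake/curated/orders/"}]},
    SchemaChangePolicy={
        "UpdateBehavior": "UPDATE_IN_DATABASE",
        "DeleteBehavior": "DEPRECATE_IN_DATABASE",
    },
    TablePrefix="curated_",
)

# Kick off the first crawl; subsequent runs could be scheduled instead.
glue.start_crawler(Name="rnd-data-lake-orders-crawler")
```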
Posted 1 week ago
0.0 - 5.0 years
3 - 8 Lacs
Hyderabad/Secunderabad, Chennai, Bangalore/Bengaluru
Hybrid
• Bachelor's degree in Computer Science or a related field • 0 - 5 years of experience in software development (Freshers can also apply) • Strong problem-solving and analytical skills
Posted 1 week ago
3.0 - 7.0 years
15 - 20 Lacs
Hyderabad, Gurugram
Work from Office
Role: Hadoop Data Engineer
Location: Gurgaon / Hyderabad
Work Mode: Hybrid
Employment Type: Full-Time
Interview Mode: First video, then in person

Job Overview: We are looking for experienced Data Engineers proficient in Hadoop, Hive, Python, SQL, and PySpark/Spark to join our dynamic team. Candidates will be responsible for designing, developing, and maintaining scalable big data solutions.

Key Responsibilities: Develop and optimize data pipelines for large-scale data processing. Work with structured and unstructured datasets to derive actionable insights. Collaborate with cross-functional teams to enhance data-driven decision-making. Ensure the performance, scalability, and reliability of data architectures. Implement best practices for data security and governance.
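To illustrate the kind of Hive-backed pipeline step this role describes, here is a minimal, hypothetical PySpark/Spark SQL sketch that aggregates a raw Hive table into a partition of a target table. The database, table, column and partition names are placeholders.

```python
# Hypothetical sketch of a Hive-backed batch aggregation step with Spark SQL.
# Database, table, column and partition names are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("daily-usage-aggregation")
    .enableHiveSupport()     # assumes a Hive metastore is configured
    .getOrCreate()
)

# Aggregate one day of raw events into a static partition of the target table.
spark.sql("""
    INSERT OVERWRITE TABLE analytics.daily_usage PARTITION (event_date = '2024-01-01')
    SELECT user_id,
           COUNT(*)        AS events,
           SUM(bytes_used) AS total_bytes
    FROM   raw.usage_events
    WHERE  event_date = '2024-01-01'
    GROUP BY user_id
""")

spark.stop()
```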
Posted 1 week ago
0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Experienced in Core Java & Spring Boot.
Design, develop, and implement Kafka-based microservices using Spring Boot. Build data pipelines for ingesting, processing, and analyzing large-scale data sets. Optimize Kafka configurations for performance and reliability. Work with Big Data technologies such as Hadoop, Spark, and NoSQL databases. Ensure data security, integrity, and compliance with industry standards. Troubleshoot and resolve issues related to Kafka topics, consumers, and producers. Monitor system performance and proactively address bottlenecks. Participate in code reviews and mentor junior developers. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.

A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction.
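The role itself calls for Java and Spring Boot; purely to illustrate the produce/consume flow behind such a Kafka-based service, here is a minimal, hypothetical Python sketch using the confluent-kafka package. The broker address, topic and group id are placeholders.

```python
# Minimal, hypothetical illustration of the Kafka produce/consume flow behind
# such a microservice, written in Python (confluent-kafka) rather than the
# Spring Boot / Java stack the role actually targets. Broker, topic and group
# id are placeholders.
import json
from confluent_kafka import Producer, Consumer

BROKERS = "localhost:9092"
TOPIC = "orders"

# Producer side: publish one event to the topic
producer = Producer({"bootstrap.servers": BROKERS})
producer.produce(TOPIC, key="order-123", value=json.dumps({"order_id": 123, "amount": 42.5}))
producer.flush()

# Consumer side: read events from the topic
consumer = Consumer({
    "bootstrap.servers": BROKERS,
    "group.id": "order-processor",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])

msg = consumer.poll(timeout=5.0)
if msg is not None and msg.error() is None:
    print("Processed:", json.loads(msg.value()))
consumer.close()
```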
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.

Director - Data Engineering
As a Director, you will be leading a team focused on scaling and managing Data Engineering solutions and teams focused on the Verizon Consumer and Verizon Business domains. In addition to this, you will be setting the standards and building reusable engineering frameworks and services to industrialize the way data is acquired, processed and consumed. In this setup, you will be leading a team of people managers, senior data engineers and architects. You will build best-in-class solutions and services on premises, in Cloud and/or on the edge, with the ability to handle both real-time and batch workloads. You will also be devising AI strategies to make data intake to consumption a seamless and efficient process. You will also play a key role in making ours a great place to work through a strong focus on employee engagement, enablement and talent development.

Building a diverse team with a strong leadership pipeline and an inclusive culture, and attracting and retaining top engineering talent. Contributing to driving world-class employee satisfaction through the implementation of relevant organizational initiatives that build on employee commitment. Mentoring and guiding senior data experts and people leaders. Defining & continuously measuring the objectives & key results to drive an outcome-oriented practice across the team. Enabling career development via access to training resources, on-the-job coaching and mentoring. Creating the strategy for the problem space in line with the overall vision of the organization and through feedback from internal & external stakeholders. Implementing reusable and fit-for-future data optimization techniques and services through an automate-first mindset. Implementing modern data industrialization practices across Cloud, Big Data and relational database platforms & services. Driving a “Data Product” first mindset and building solutions aligned with those principles. Applying an AI-first approach to building data that delivers intelligent insights. Serving as a technical thought leader and advisor to the organization. Having an external presence in the industry. Establishing best practices and governance routines to ensure adherence to SLAs and FinOps. Staying informed on the latest advancements in the AI and data technology space, finding ways to deliver value by applying and customizing these to specific problem statements in the enterprise. Fostering an innovation culture where our employees are engaged in pioneering work that can be shared externally in a number of ways like patents, open source, publications, etc. Inspiring our team to create monetizable services that can be leveraged both within Verizon and externally with Verizon’s partners.

What We’re Looking For...
You are a dynamic leader with outstanding analytical capabilities and strong technology leadership, along with hands-on and leadership experience in data engineering and end-to-end product development and delivery. You are constantly challenging the status quo and identifying new, innovative ways to drive your team towards new ways of working that create efficiencies.

You’ll Need To Have
Bachelor’s degree or four or more years of work experience. Six or more years of relevant work experience. Experience in leading and building data and full-stack engineering teams with a specific focus on GCP, AWS and Hadoop. Experience in Data Architecture. Experience in applying AI to the lifecycle of data. Experience in delivering business outcomes in large and complex business organizations. Experience in communicating complex designs and outcomes to a non-technical audience. Willingness to travel up to approximately 15% of the time.

Even better if you have one or more of the following: Master’s degree in Computer Science or an MBA. Strong knowledge of the Consumer and Business enterprise domains, with expertise in Telecom. Experience in Relational Database Management Systems (RDBMS). Experience in implementing Gen AI and Agentic AI solutions. Experience in end-to-end Product lifecycle management. Experience in driving significant cost optimization on Google Cloud. #AI&D

Where you’ll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours 40

Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
Posted 1 week ago
4.0 - 5.0 years
0 Lacs
Hyderābād
On-site
USI DT Canada MF is an integral part of the Information Technology Services group. The principal focus of this organization is the development and maintenance of technology solutions that e-enable the delivery of Function and Marketplace Services and Management Information Systems. The Solutions Delivery group develops and maintains solutions built on varied technologies like Siebel, PeopleSoft, Microsoft technologies, SAP, Hadoop, ETL, BI and Lotus Notes. Solutions Delivery Canada has various groups which provide best-of-breed solutions to clients by following a streamlined system development methodology. Solutions Delivery comprises groups like Usability, Application Architecture, Development, Quality Assurance and Performance.

Role Specific Responsibilities / Work you’ll do
Perform troubleshooting and problem resolution of any complex application built. Support and coordinate the efforts of Subject Matter Experts, Development, Quality Assurance, Usability, Training, Transport Management, and other internal resources for the successful implementation of system enhancements and fixes. Perform SAP HANA programming as required. Create and maintain internal documentation and end-user training materials as needed. Work with team members to analyze, plan, design, develop, and implement solutions to meet strategic, usability, performance, reliability, control, and security requirements.

The team
EDC Canada is the Canada CIO’s IT department which manages an end-to-end portfolio of Canada business applications and technology infrastructure that supports business processes common to the Deloitte Canada member firm.

Cutting Edge Technologies: At USI DT Canada MF, you will be part of an exciting journey that will keep you ahead of the curve. Be it our innovative delivery model for agile or our Communities of Practice, we are constantly investing in leading-edge technologies to give our practitioners a world-class experience. We have programs and projects spanning a multitude of technologies and always stay abreast of evolving technologies and emerging industry-leading practices such as agile.

Application Development and Solutions Delivery: Start from Architecture and User Experience and evolve into designing, developing, transforming, re-platforming, or custom-building systems in complex business scenarios. We manage a portfolio of enterprise-scale applications and solutions used by practitioners in Canada. Offerings include Custom Development, Packaged Application Development, Application Architecture and Testing Advisory Services. Technologies include Business Analytics, Business Intelligence, Cloud Development, Mobile, .Net, SharePoint, SAP HANA, Manual, Automated, and Performance testing.

Location: Hyderabad
Work shift timings: 11 AM to 8 PM

Qualifications
Essential
A Computer Science university degree and/or equivalent work experience. A strong commitment to professional client service excellence. Excellent interpersonal relations and demonstrated ability to work with others effectively in teams. Good verbal and written communication skills. Excellent analytical skills.

Top 3 Keywords: Native HANA Suite, SQL, Calculation Views

Technical Skills and Qualifications
BE / B.Tech graduate. 4 to 5 years of experience in the design and development of HANA calculation views (a minimal query sketch appears at the end of this posting). Hands-on experience with SQL is a must. Good knowledge of stored procedures is desirable. Translate business KPIs into HANA and define the reporting data models.
Effective communication skills, including the ability to write in a well-organized, cohesive, and concise manner. Experience in working with onshore and offshore delivery models. Define reusable components/frameworks, common schemas, and the standards and tools to be used.

Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 304281
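As referenced in the posting above, here is a minimal, hypothetical Python sketch of querying an activated HANA calculation view with SAP's hdbcli driver. The host, credentials, package path and view name are placeholders; activated calculation views are typically exposed under the _SYS_BIC schema.

```python
# Hypothetical sketch: query a HANA calculation view from Python via SAP's
# hdbcli driver. Host, port, credentials, package path and view name are
# placeholders, not taken from the posting.
from hdbcli import dbapi

conn = dbapi.connect(
    address="hana.example.com",
    port=30015,
    user="REPORTING_USER",
    password="********",
)

cursor = conn.cursor()
# Activated calculation views are typically exposed under _SYS_BIC as
# "<package.path>/<view name>".
cursor.execute(
    'SELECT "Region", SUM("Revenue") AS "Revenue" '
    'FROM "_SYS_BIC"."sales.models/CV_SALES_KPI" '
    'GROUP BY "Region"'
)
for region, revenue in cursor.fetchall():
    print(region, revenue)

cursor.close()
conn.close()
```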
Posted 1 week ago
5.0 years
6 - 7 Lacs
Hyderābād
On-site
Job description
Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

In this role you will:
Design and Develop ETL Processes: Lead the design and implementation of ETL processes using all kinds of batch/streaming tools to extract, transform, and load data from various sources into GCP. Collaborate with stakeholders to gather requirements and ensure that ETL solutions meet business needs.
Data Pipeline Optimization: Optimize data pipelines for performance, scalability, and reliability, ensuring efficient data processing workflows. Monitor and troubleshoot ETL processes, proactively addressing issues and bottlenecks.
Data Integration and Management: Integrate data from diverse sources, including databases, APIs, and flat files, ensuring data quality and consistency. Manage and maintain data storage solutions in GCP (e.g., BigQuery, Cloud Storage) to support analytics and reporting.
GCP Dataflow Development: Write Apache Beam-based Dataflow jobs for data extraction, transformation, and analysis, ensuring optimal performance and accuracy (a minimal sketch appears at the end of this posting). Collaborate with data analysts and data scientists to prepare data for analysis and reporting.
Automation and Monitoring: Implement automation for ETL workflows using tools like Apache Airflow or Cloud Composer, enhancing efficiency and reducing manual intervention. Set up monitoring and alerting mechanisms to ensure the health of data pipelines and compliance with SLAs.
Data Governance and Security: Apply best practices for data governance, ensuring compliance with industry regulations (e.g., GDPR, HIPAA) and internal policies. Collaborate with security teams to implement data protection measures and address vulnerabilities.
Documentation and Knowledge Sharing: Document ETL processes, data models, and architecture to facilitate knowledge sharing and onboarding of new team members. Conduct training sessions and workshops to share expertise and promote best practices within the team.

Requirements
To be successful in this role, you should meet the following requirements:
Education: Bachelor’s degree in Computer Science, Information Systems, or a related field.
Experience: Minimum of 5 years of industry experience in data engineering or ETL development, with a strong focus on DataStage and GCP. Proven experience in designing and managing ETL solutions, including data modeling, data warehousing, and SQL development.
Technical Skills: Strong knowledge of GCP services (e.g., BigQuery, Dataflow, Cloud Storage, Pub/Sub) and their application in data engineering. Experience with cloud-based solutions, especially in GCP; a cloud-certified candidate is preferred. Experience and knowledge of Big Data processing in batch and streaming modes; proficiency in Big Data ecosystems, e.g. Hadoop, HBase, Hive, MapReduce, Kafka, Flink, Spark, etc.
Familiarity with Java & Python for data manipulation on Cloud/Big Data platforms.
Analytical Skills: Strong problem-solving skills with keen attention to detail. Ability to analyze complex data sets and derive meaningful insights.
Benefits: Competitive salary and comprehensive benefits package. Opportunity to work in a dynamic and collaborative environment on cutting-edge data projects. Professional development opportunities to enhance your skills and advance your career.
If you are a passionate data engineer with expertise in ETL processes and a desire to make a significant impact within our organization, we encourage you to apply for this exciting opportunity! You’ll achieve more when you join HSBC. www.hsbc.com/careers
HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSDI
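As referenced in the posting above, here is a minimal, hypothetical Apache Beam (Python SDK) sketch of such a Dataflow-style job: read JSON records from Cloud Storage, transform them, and write to BigQuery. The bucket, table and the illustrative conversion are placeholders; running on Dataflow is only a matter of pipeline options.

```python
# Hypothetical Apache Beam sketch: read JSON lines from Cloud Storage,
# transform them, and write to BigQuery. Bucket, project, table and the
# conversion factor are placeholders for illustration.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_and_enrich(line: str) -> dict:
    record = json.loads(line)
    record["amount_usd"] = round(record.get("amount", 0) * 0.012, 2)  # illustrative conversion
    return record


# Add --runner=DataflowRunner, --project and --region to run on Dataflow.
options = PipelineOptions()

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/raw/transactions-*.json")
        | "Parse" >> beam.Map(parse_and_enrich)
        | "Write" >> beam.io.WriteToBigQuery(
            "example-project:analytics.transactions",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
        )
    )
```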
Posted 1 week ago
3.0 years
4 - 6 Lacs
Hyderābād
On-site
- 3+ years of data engineering experience
- 4+ years of SQL experience
- Experience with data modeling, warehousing and building ETL pipelines

As a Data Engineer you will be working on building and maintaining complex data pipelines, assembling large and complex datasets to generate business insights, enable data-driven decision making and support the rapidly growing and dynamic business demand for data. You will have an opportunity to collaborate and work with various teams of Business Analysts, Managers, Software Dev Engineers, and Data Engineers to determine how best to design, implement and support solutions. You will be challenged and provided with tremendous growth opportunity in a customer-facing, fast-paced, agile environment.

Key job responsibilities
* Design, implement and support analytical data platform solutions for data-driven decisions and insights
* Design data schemas and operate internal data warehouses & SQL/NoSQL database systems
* Work on different data model designs, architecture, implementation, discussions and optimizations
* Interface with other teams to extract, transform, and load data from a wide variety of data sources using AWS big data technologies like EMR, Redshift, Elasticsearch, etc.
* Work on different AWS technologies such as S3, Redshift, Lambda, Glue, etc., and explore and learn the latest AWS technologies to provide new capabilities and increase efficiency
* Work on the data lake platform and different components in the data lake such as Hadoop, Amazon S3, etc.
* Work on SQL technologies on Hadoop such as Spark, Hive, Impala, etc.
* Help continually improve ongoing analysis processes, optimizing or simplifying self-service support for customers
* Must possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment.
* Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.
* Enjoy working closely with your peers in a group of talented engineers and gain knowledge.
* Be enthusiastic about building deep domain knowledge of Amazon's various business domains.
* Own the development and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.

Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
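As one illustration of the data lake work described above, here is a minimal, hypothetical sketch using the awswrangler (AWS SDK for pandas) library: it lands a small, partitioned dataset in S3 and registers it in the Glue Data Catalog so it can be queried from Redshift Spectrum, Athena or EMR. The bucket, database and table names are placeholders, and the Glue database is assumed to already exist.

```python
# Hypothetical sketch: write a partitioned dataset to S3 and register it in
# the Glue Data Catalog with awswrangler. Bucket, database and table names are
# placeholders; the Glue database "analytics" is assumed to exist.
import awswrangler as wr
import pandas as pd

orders = pd.DataFrame(
    {
        "order_id": [1, 2, 3],
        "customer_id": ["a1", "b2", "a1"],
        "amount": [120.0, 35.5, 80.0],
        "dt": ["2024-01-01", "2024-01-01", "2024-01-02"],
    }
)

wr.s3.to_parquet(
    df=orders,
    path="s3://example-analytics-lake/curated/orders/",
    dataset=True,
    partition_cols=["dt"],          # partition by date for efficient scans
    database="analytics",           # Glue database (assumed to exist)
    table="orders",
    mode="overwrite_partitions",
)
```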
Posted 1 week ago
The demand for Hadoop professionals in India has been on the rise in recent years, with many companies leveraging big data technologies to drive business decisions. As a job seeker exploring opportunities in the Hadoop field, it is important to understand the job market, salary expectations, career progression, related skills, and common interview questions.
India's major IT hubs are known for their thriving technology industry and have a high demand for Hadoop professionals.
The average salary range for Hadoop professionals in India varies based on experience levels. Entry-level Hadoop developers can expect to earn between INR 4-6 lakhs per annum, while experienced professionals with specialized skills can earn upwards of INR 15 lakhs per annum.
In the Hadoop field, a typical career path may include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually progressing to roles like Data Architect or Big Data Engineer.
In addition to Hadoop expertise, professionals in this field are often expected to have knowledge of related technologies such as Apache Spark, HBase, Hive, and Pig. Strong programming skills in languages like Java, Python, or Scala are also beneficial.
As you navigate the Hadoop job market in India, remember to stay updated on the latest trends and technologies in the field. By honing your skills and preparing diligently for interviews, you can position yourself as a strong candidate for lucrative opportunities in the big data industry. Good luck on your job search!