
2510 Hive Jobs - Page 26

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 7.0 years

13 - 18 Lacs

Bengaluru

Work from Office

Naukri

Job Area: Engineering Group > Software Engineering

General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud-edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and to obtain information on performance requirements and interfaces.

Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field and 2+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or a related field and 1+ year of Software Engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or a related field. 2+ years of academic or work experience with a programming language such as C, C++, Java, or Python.

Preferred Qualifications: The display software team is looking for talented software engineers interested in developing software for mobile and embedded devices. The team is responsible for delivering device drivers and tools for Snapdragon chipsets, providing best-in-class performance, power, and features. This role involves firmware development for Display. Responsibilities include the design and development of new features, support for new hardware in pre- and post-silicon development, debugging of software issues, optimizing software for performance and power, developing unit tests, and working with our partners and OEMs. In addition, you will work with other technologies, including video encoders, video decoders, DSPs, and GPUs for Qualcomm multimedia cores, toward meeting project milestones.

Principal Duties and Responsibilities: Detail-oriented with strong analytical and debugging skills. Strong working knowledge of C/C++ programming. Knowledge of one or more operating systems or RTOSes (Embedded Linux, Windows). Strong working knowledge of the Linux kernel. Experienced in Linux kernel architecture and driver development, including signals, priorities, deadlocks, stacks, interrupts, memory management, the scheduler, and synchronization methods. Understanding of low-level software/hardware interface design and debugging. Knowledge of one or more of the following disciplines is preferred: Display (pixel processing/composition, MIPI DSI, HDMI, DisplayPort, etc.) and experience with the following display/graphics frameworks and platforms: Android, Weston/Wayland. Experience with the DRM/KMS driver is an added advantage.

Level of Responsibility: Works under supervision. Decision-making may affect work beyond the immediate work group. Requires verbal and written communication skills to convey information. May require basic negotiation, influence, tact, etc. Tasks do not have defined steps; planning, problem-solving, and prioritization must occur to complete tasks effectively.

Posted 1 week ago

Apply

2.0 - 7.0 years

13 - 18 Lacs

Bengaluru

Work from Office

Naukri

Job Area: Engineering Group > Software Applications Engineering

General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Applications Engineer, you will provide technical expertise on software systems through technical presentations in support of business development, product demonstrations, design and development of customer-specific requirements, commercialization, and maintenance of Qualcomm products. Qualcomm engineers collaborate with cross-functional teams and customers to address questions, issues, debugging, or troubleshooting regarding software systems and applications.

Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field and 2+ years of Software Applications Engineering, Software Development, or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or a related field and 1+ year of Software Applications Engineering, Software Development, or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or a related field. 1+ year of any combination of academic and/or work experience with a programming language such as C, C++, Java, or Python. 1+ year of any combination of academic and/or work experience with debugging techniques.

For Display: We are seeking a highly skilled Display Engineer with a solid understanding of the display stack in the Linux DRM/KMS framework or in QNX. The ideal candidate will have extensive experience in developing and supporting display drivers and a strong background in multimedia domains, particularly display and graphics.

Key Responsibilities: Provide engineering support to Qualcomm IVI/ADAS customers. Collaborate with Product Development Managers (PDMs) and engineering teams to address customer requirements and issues. Support and troubleshoot issues reported by customers in lab environments, drive tests, and during certifications. Perform root cause analysis of customer issues and provide detailed feedback to the engineering team. Develop and maintain Linux kernel device drivers, focusing on DRM/KMS, stability, and boot architecture. Work with Android, QNX, and hypervisor-based platforms to ensure seamless integration and performance. Utilize debug tools related to memory, gdb, and coredump for efficient problem-solving. Develop and maintain utilities and scripts using Python. Stay updated on the latest advancements in display and graphics technologies.

Required Qualifications: Bachelor's degree in Engineering (E&C or CS). Excellent communication and analytical skills. Proven experience with Linux kernel device drivers, particularly DRM/KMS. Strong understanding of stability and boot architecture. Experience with Android, QNX, and hypervisor-based platforms. Proficiency in the C and C++ programming languages. Working knowledge of debug tools such as gdb and core dump analysis. Proficiency in Python scripting. Experience in the multimedia domain, specifically display and graphics.
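
The posting asks for Python utilities and scripts around driver debugging. As a rough illustration only (the log format, regex pattern, and file names below are hypothetical assumptions, not from the posting), a small utility in the spirit of the role might summarize DRM error lines from a captured kernel log:

```python
import re
import sys
from collections import Counter

# Hypothetical pattern for DRM error lines in a kernel log; real triage
# would use patterns agreed with the driver team.
DRM_ERROR = re.compile(r"\[drm(:[\w.]+)?\]\s*\*ERROR\*\s*(?P<msg>.*)")

def summarize_drm_errors(log_path: str) -> Counter:
    """Count distinct DRM error messages in a kernel log (e.g. dmesg output)."""
    counts: Counter = Counter()
    with open(log_path, errors="replace") as fh:
        for line in fh:
            m = DRM_ERROR.search(line)
            if m:
                counts[m.group("msg").strip()] += 1
    return counts

if __name__ == "__main__":
    # Usage: python drm_log_summary.py kernel.log
    for msg, n in summarize_drm_errors(sys.argv[1]).most_common(10):
        print(f"{n:5d}  {msg}")
```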

Posted 1 week ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

LinkedIn

Our Purpose

Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart, and accessible. Our technology and innovation, partnerships, and networks combine to deliver a unique set of products and services that help people, businesses, and governments realize their greatest potential.

Title and Summary: Data Scientist

Who is Mastercard? Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships, and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all.

Our Team: As consumer preference for digital payments continues to grow, ensuring a seamless and secure consumer experience is top of mind. The Optimization Solutions team focuses on tracking digital performance across all products and regions, understanding the factors influencing performance and the broader industry landscape. This includes delivering data-driven insights and business recommendations, engaging directly with key external stakeholders on implementing optimization solutions (new and existing), and partnering across the organization to drive alignment and ensure action is taken. Are you excited about data assets and the value they bring to an organization? Are you an evangelist for data-driven decision-making? Are you motivated to be part of a team that builds large-scale analytical capabilities supporting end users across six continents? Do you want to be the go-to resource for data science and analytics in the company?

The Role: Work closely with the global optimization solutions team to architect, develop, and maintain advanced reporting and data visualization capabilities on large volumes of data to support data insights and analytical needs across products, markets, and services. The candidate for this position will focus on building solutions using machine learning and creating actionable insights to support product optimization and sales enablement. Prototype new algorithms; experiment, evaluate, and deliver actionable insights. Drive the evolution of products with an impact focused on data science and engineering. Design machine learning systems and self-running artificial intelligence (AI) software to automate predictive models. Perform data ingestion, aggregation, and processing on high-volume, high-dimensionality data to drive and enable data unification and produce relevant insights. Continuously innovate and determine new approaches, tools, techniques, and technologies to solve business problems and generate business insights and recommendations. Apply knowledge of metrics, measurements, and benchmarking to complex and demanding solutions.
All About You: A superior academic record at a leading university in Computer Science, Data Science, Technology, mathematics, statistics, or a related field, or equivalent work experience. Experience in data management, data mining, data analytics, data reporting, data product development, and quantitative analysis. Strong analytical skills with a track record of translating data into compelling insights. Prior experience working in a product development role. Knowledge of ML frameworks, libraries, data structures, data modeling, and software architecture. Proficiency in using Python/Spark, Hadoop platforms and tools (Hive, Impala, Airflow, NiFi), and SQL to build Big Data products and platforms. Experience with an enterprise business intelligence/data platform (e.g., Tableau, Power BI) is a plus. Demonstrated success interacting with stakeholders to understand technical needs and ensuring analyses and solutions meet their needs effectively. Ability to build a strong narrative on the business value of products and actively participate in sales enablement efforts. Able to work in a fast-paced, deadline-driven environment, both as part of a team and as an individual contributor.

Corporate Security Responsibility: All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: abide by Mastercard's security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.

R-250830
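
Since the role centers on building predictive models in Python, here is a minimal, self-contained sketch of the kind of supervised pipeline the description alludes to; the synthetic features, model choice, and metric are illustrative assumptions, not Mastercard's actual stack:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy stand-in for aggregated transaction features; in practice the
# features would be engineered upstream in Spark/Hive.
rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1_000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scaling + gradient boosting wrapped as one deployable pipeline object.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", GradientBoostingClassifier(random_state=0)),
])
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```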

Posted 1 week ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

LinkedIn

Organization Summary: A career within Operations Consulting services will provide you with the opportunity to help our clients optimize all elements of their operations, moving beyond the role of a cost-effective business enabler to become a source of competitive advantage. We focus on product innovation and development, supply chain, procurement and sourcing, manufacturing operations, service operations, and capital asset programs to drive both growth and profitability. In our Operations Transformation team, you'll work with our clients to transform their enterprise processes by leveraging integrated supply and demand planning solutions to enhance their core transaction processing and reporting competencies, ultimately strengthening their ability to support management decision-making and corporate strategy. To really stand out and make us fit for the future in a constantly changing world, each and every one of us at PwC needs to be a purpose-led and values-driven leader at every level. To help us achieve this, we have the PwC Professional, our global leadership development framework. It gives us a single set of expectations across our lines, geographies, and career paths, and provides transparency on the skills we need as individuals to be successful and progress in our careers, now and in the future.

We are seeking an experienced Senior Consultant with a strong technical background and extensive experience in implementing o9, Blue Yonder (BY), Kinaxis, or SAP IBP planning solutions. The ideal candidate will have completed multiple implementations and possess deep knowledge of these products and their technical architectures. This role focuses on the ability to translate business requirements into technical and architecture needs, and to lead or support the implementation journey.

Key Responsibilities: Drive a workstream while executing technical implementation of o9, BY, Kinaxis, or SAP IBP solutions. Collaborate with stakeholders to understand business requirements and translate them into technical specifications. Develop materials to assist in design and process discussions with clients. Conduct design discussions with clients to align on key design decisions. Support the design and architecture of the overall technology framework for implementations. Develop testing strategies and test scenarios. Identify gaps and develop custom design specifications. Troubleshoot and resolve technical issues that arise during implementation. Ensure best practices and quality standards are followed across the engagement delivery. Conduct training sessions and knowledge transfer to internal teams and client teams. Travel may be required for this role, depending on client requirements.

Education: MBA/MTech or a Master's degree in a related field.

Certifications: Certifications related to o9, Blue Yonder, SAP IBP, Kinaxis, or other relevant technologies.

Required Skills: Functional and technical expertise in o9, Blue Yonder, Kinaxis, or SAP IBP, including reference model configurations and workflows. Supply chain planning domain experience in demand planning, supply and inventory planning, production planning, S&OP, and IBP.

Optional Skills: Advanced understanding of data models for o9, Blue Yonder, Kinaxis, or SAP IBP. Experience with other supply chain planning solutions. Database skills: SQL, Python on Hadoop, R scripts. MS SSIS integration skills, Hadoop Hive.

Travel Requirements: Yes

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

LinkedIn

Key Responsibilities:

Test Strategy & Planning: Develop and implement robust test strategies, detailed test plans, and comprehensive test cases for ETL processes, data migrations, data warehouse solutions, and data lake implementations.

Ab Initio ETL Testing: Execute functional, integration, regression, and performance tests for ETL jobs developed using the Ab Initio Graphical Development Environment (GDE) and Co>Operating System, and plans deployed via Control Center. Validate data transformations, aggregations, and data quality rules implemented within Ab Initio graphs.

Spark Data Pipeline Testing: Perform hands-on testing of data pipelines and transformations built using Apache Spark (PySpark/Scala Spark) for large-scale data processing in batch and potentially streaming modes. Verify data correctness, consistency, and performance of Spark jobs from source to target.

Advanced Data Validation & Reconciliation: Perform extensive data validation and reconciliation between source, staging, and target systems using complex SQL queries. Conduct row counts, sum checks, data type validations, primary key/foreign key integrity checks, and business rule validations. A small PySpark harness for these checks is sketched below.

Data Quality Assurance: Identify, analyze, document, and track data quality issues, anomalies, and discrepancies across the data landscape. Collaborate closely with ETL/Spark developers, data architects, and business analysts to understand data quality requirements, identify root causes, and ensure timely resolution of defects.

Documentation & Reporting: Create and maintain detailed test documentation, including test cases, test results, defect reports, and data quality metrics dashboards. Provide clear and concise communication on test progress, defect status, and overall data quality posture to stakeholders.

Required Skills & Qualifications: Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. 3+ years of dedicated experience in ETL/data warehouse testing. Strong hands-on experience testing ETL processes developed using Ab Initio (GDE, Co>Operating System). Hands-on experience testing data pipelines built with Apache Spark (PySpark or Scala Spark). Advanced SQL skills for data querying, validation, complex joins, and comparison across heterogeneous databases (e.g., Oracle, DB2, SQL Server, Hive). Solid understanding of ETL methodologies, data warehousing concepts (star schema, snowflake schema), and data modeling principles. Experience with test management and defect tracking tools (e.g., JIRA, Azure DevOps, HP ALM). Excellent analytical, problem-solving, and communication skills, with a keen eye for detail.
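
The reconciliation duties above (row counts, sum checks, key-integrity checks) translate naturally into a small PySpark harness. The table and column names below are hypothetical placeholders for the systems under test:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-reconciliation").getOrCreate()

# Hypothetical staging (source) and warehouse (target) tables under test.
src = spark.table("staging.orders")
tgt = spark.table("dw.orders")

# Row-count check.
src_count, tgt_count = src.count(), tgt.count()
assert src_count == tgt_count, f"row counts differ: {src_count} vs {tgt_count}"

# Sum check on a measure column.
src_sum = src.agg(F.sum("amount").alias("s")).first()["s"]
tgt_sum = tgt.agg(F.sum("amount").alias("s")).first()["s"]
assert src_sum == tgt_sum, f"amount sums differ: {src_sum} vs {tgt_sum}"

# Key-integrity check: target rows whose key never appeared in the source.
orphans = tgt.join(src, on="order_id", how="left_anti")
assert orphans.count() == 0, "target rows without a matching source key"
```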

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

LinkedIn

Job Summary: We are seeking an experienced Data Engineer with a strong background in Scala development, advanced SQL, and big data technologies, particularly Apache Spark. The candidate will be responsible for designing, building, optimizing, and maintaining highly scalable and reliable data pipelines and data infrastructure.

Key Responsibilities:

Data Pipeline Development: Design, develop, test, and deploy robust, high-performance, and scalable ETL/ELT data pipelines using Scala and Apache Spark to ingest, process, and transform large volumes of structured and unstructured data from diverse sources.

Big Data Expertise: Leverage expertise in the Hadoop ecosystem (HDFS, Hive, etc.) and distributed computing principles to build efficient and fault-tolerant data solutions.

Advanced SQL: Write complex, optimized SQL queries and stored procedures.

Performance Optimization: Continuously monitor, analyze, and optimize the performance of data pipelines and data stores. Troubleshoot complex data-related issues, identify bottlenecks, and implement solutions for improved efficiency and reliability.

Data Quality & Governance: Implement data quality checks, validation rules, and reconciliation processes to ensure the accuracy, completeness, and consistency of data. Contribute to data governance and security best practices.

Automation & CI/CD: Implement automation for data pipeline deployment, monitoring, and alerting using tools like Apache Airflow, Jenkins, or similar CI/CD platforms.

Documentation: Create and maintain comprehensive technical documentation for data architectures, pipelines, and processes.

Required Skills & Qualifications: Bachelor's or master's degree in Computer Science, Engineering, or a related quantitative field. Minimum 5 years of professional experience in data engineering, with a strong focus on big data technologies. Proficiency in Scala for developing big data applications and transformations, especially with Apache Spark. Expert-level proficiency in SQL; ability to write complex queries, optimize performance, and understand database internals. Extensive hands-on experience with Apache Spark (Spark SQL, DataFrames, RDDs) for large-scale data processing and analytics. Solid understanding of distributed computing concepts and experience with the Hadoop ecosystem (HDFS, Hive). Experience building and optimizing ETL/ELT processes and data warehousing concepts.
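
As a sketch of the kind of ingest-transform-load pipeline described above, here is a minimal PySpark version (the posting itself emphasizes Scala; the paths, column names, and broadcast-join choice are illustrative assumptions):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Hypothetical layout: a large fact dataset plus a small dimension table.
orders = spark.read.parquet("s3://bucket/raw/orders/")
countries = spark.read.parquet("s3://bucket/ref/countries/")  # small lookup

enriched = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    # Broadcasting the small dimension avoids shuffling the large side.
    .join(broadcast(countries), on="country_code", how="left")
    .filter(F.col("amount") > 0)
)

# Partitioned write keeps downstream date-range scans cheap.
enriched.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://bucket/curated/orders/"
)
```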

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

LinkedIn

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities: As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows for source-to-target processing and implementing solutions that tackle clients' needs.

Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Preferred Education: Master's Degree

Required Technical and Professional Expertise: Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Developed Python and PySpark programs for data analysis. Good working experience with Python to develop custom frameworks for generating rules (like a rules engine). Developed Python code to gather data from HBase and designed solutions implemented using PySpark. Used Apache Spark DataFrames/RDDs to apply business transformations and utilized HiveContext objects to perform read/write operations.

Preferred Technical and Professional Experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
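
The expertise bullets mention applying business transformations with Spark DataFrames and reading/writing through Hive contexts. A minimal modern equivalent, with hypothetical database and column names, could look like this (in current Spark the old HiveContext is subsumed by a Hive-enabled SparkSession):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hive support lets spark.sql() and saveAsTable() target the Hive metastore.
spark = (
    SparkSession.builder.appName("hive-transform")
    .enableHiveSupport()
    .getOrCreate()
)

# Hypothetical Hive tables: read, apply a business transformation, write back.
events = spark.sql("SELECT user_id, event_type, amount FROM raw_db.events")

daily = (
    events.filter(F.col("event_type") == "purchase")
    .groupBy("user_id")
    .agg(F.sum("amount").alias("total_spend"))
)

daily.write.mode("overwrite").saveAsTable("curated_db.user_spend")
```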

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

LinkedIn

Key Responsibilities:

Ab Initio Development & Optimization: Design, develop, test, and deploy high-performance, scalable ETL/ELT solutions using Ab Initio components (GDE, Co>Operating System, EME, Control Center). Translate complex business requirements and data transformation rules into efficient and maintainable Ab Initio graphs and plans. Optimize existing Ab Initio applications for improved performance, resource utilization, and reliability. Troubleshoot, debug, and resolve complex data quality and processing issues within Ab Initio graphs and systems.

Data Modeling & Advanced SQL: Apply expertise in advanced SQL to write complex queries for data extraction, transformation, validation, and analysis across various relational databases (e.g., DB2, Oracle, SQL Server). Design and implement efficient relational data models (e.g., star schema, snowflake schema, 3NF) for data warehousing and analytics. Understand and apply big data modeling concepts (e.g., denormalization for performance, schema-on-read, partitioning strategies for distributed systems).

Spark & Big Data Integration: Collaborate with data architects on data integration strategies in a hybrid environment, understanding how Ab Initio processes interact with or feed into big data platforms. Analyze and debug data flow issues that may span traditional ETL and big data platforms (e.g., HDFS, Hive, Spark). Demonstrate strong foundational knowledge of Apache Spark, including Spark SQL and DataFrame operations, to comprehend and potentially assist in debugging Spark-based pipelines.

Collaboration & Documentation: Work effectively with business analysts, data architects, QA teams, and other developers to deliver high-quality data solutions. Create and maintain comprehensive technical documentation for Ab Initio graphs, data lineage, data models, and ETL processes. Participate in code reviews and design discussions, and contribute to best practices within the team.

Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. 5+ years of hands-on, in-depth development experience with Ab Initio GDE, Co>Operating System, and EME. Expert-level proficiency in SQL for complex data manipulation, analysis, and optimization across various relational databases. Solid understanding of relational data modeling concepts and experience designing logical and physical data models. Demonstrated proficiency or strong foundational knowledge in Apache Spark (Spark SQL, DataFrames) and familiarity with the broader Hadoop ecosystem (HDFS, Hive). Experience with Unix/Linux shell scripting. Strong understanding of ETL processes, data warehousing concepts, and data integration patterns. Excellent problem-solving, analytical, and troubleshooting skills. Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
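
For the Spark-debugging side of the role, a short PySpark sketch shows the kind of schema inspection, window-function duplicate check, and plan inspection implied above; the table and column names are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("debug-spark-feed").getOrCreate()

# Hypothetical handoff table produced by an upstream Ab Initio graph.
feed = spark.table("landing.customer_feed")
feed.printSchema()  # confirm the schema matches the graph's output layout

# A window-function duplicate check, the kind of advanced SQL the role uses.
feed.createOrReplaceTempView("feed")
dupes = spark.sql("""
    SELECT customer_id, load_ts
    FROM (
        SELECT customer_id, load_ts,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY load_ts DESC) AS rn
        FROM feed
    ) t
    WHERE rn > 1
""")
print("duplicate rows:", dupes.count())

# Inspect the physical plan when a downstream Spark job misbehaves.
dupes.explain()
```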

Posted 1 week ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

LinkedIn

Position Summary

Strategy & Analytics, AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science, and cognitive technologies to uncover hidden relationships in vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to: implement large-scale data ecosystems, including data management, governance, and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms; leverage automation, cognitive, and science-based techniques to manage data, predict scenarios, and prescribe actions; and drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise, and providing as-a-service offerings for continuous insights and improvements.

Google Cloud Platform - Data Engineer

Cloud is shifting business models at our clients and transforming the way technology enables business. As our clients embark on this transformational journey to cloud, they are looking for trusted partners who can help them navigate it. Our clients' journeys span cloud strategy to implementation, migration of legacy applications, supporting operations of a cloud ecosystem, and everything in between. Deloitte's Cloud Delivery Center supports our client project teams on this journey by delivering the new solutions by which IT services are obtained, used, and managed. You will work with other technologists to deliver cutting-edge solutions using Google Cloud Platform (GCP) services, programming, and automation tools for some of our Fortune 1000 clients. You will have the opportunity to contribute to work that may involve building new cloud solutions, migrating an application to co-exist in the hybrid cloud, deploying a global cloud application across multiple countries, or supporting a set of cloud managed services. Our teams of technologists have a diverse range of skills, and we are always looking for new ways to innovate and help our clients succeed. You will have an opportunity to leverage the skills you already have, try new technologies, and develop skills that will improve your brand and career as a well-rounded, cutting-edge technologist.

Work you'll do: As a GCP Data Engineer you will have multiple responsibilities depending on project type. As a Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migrations and transformational projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform issues. In this role you are the Data Engineer working with Deloitte's most strategic Cloud customers. Together with the team, you will support customer implementation of Google Cloud products through architecture guidance, best practices, data migration, capacity planning, implementation, troubleshooting, monitoring, and much more.

The key responsibilities may involve some or all of the following: act as a trusted technical advisor to customers and solve complex Big Data challenges; create and deliver best-practice recommendations, tutorials, blog articles, sample code, and technical presentations, adapting to different levels of key business and technical stakeholders; and identify new tools and processes to improve the cloud platform and automate processes.

Qualifications

Technical Requirements: BA/BS degree in Computer Science, Mathematics, or a related technical field, or equivalent practical experience. Experience in Cloud SQL and Cloud Bigtable. Experience in Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub, and Genomics. Experience in Google Transfer Appliance, Cloud Storage Transfer Service, and BigQuery Data Transfer. Experience with data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and with data processing algorithms (MapReduce, Flume). Experience working with technical customers. Experience writing software in one or more languages such as Java, C++, Python, Go, and/or JavaScript.

Consulting Requirements: 6-9 years of relevant consulting, industry, or technology experience. Strong problem-solving and troubleshooting skills. Strong communicator. Willingness to travel as required by projects.

Preferred Qualifications: Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools and environments. Experience in technical consulting. Experience architecting, developing software, or building internet-scale, production-grade Big Data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS/Azure (good to have). Experience working with big data, information retrieval, data mining, or machine learning, as well as experience building multi-tier, high-availability applications with modern web technologies (such as NoSQL, Kafka, NLP, MongoDB, SparkML, TensorFlow). Working knowledge of ITIL and/or agile methodologies.

Recruiting tips: From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose: Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development: From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 300079
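
As a small illustration of the BigQuery work such a role involves, here is a hedged sketch using the google-cloud-bigquery client; the project, dataset, and query are hypothetical and assume application-default credentials are configured:

```python
import datetime

from google.cloud import bigquery

# Hypothetical project; Client() picks up application-default credentials.
client = bigquery.Client(project="my-analytics-project")

query = """
    SELECT country, COUNT(*) AS events
    FROM `my-analytics-project.raw.events`
    WHERE event_date = @day
    GROUP BY country
    ORDER BY events DESC
"""
# Parameterized queries avoid string interpolation of user input.
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("day", "DATE", datetime.date(2024, 1, 1))
    ]
)

for row in client.query(query, job_config=job_config).result():
    print(row["country"], row["events"])
```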

Posted 1 week ago

Apply

6.0 - 9.0 years

0 Lacs

Gurugram, Haryana, India

On-site

LinkedIn

Position Summary

Strategy & Analytics, AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science, and cognitive technologies to uncover hidden relationships in vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to: implement large-scale data ecosystems, including data management, governance, and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms; leverage automation, cognitive, and science-based techniques to manage data, predict scenarios, and prescribe actions; and drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise, and providing as-a-service offerings for continuous insights and improvements.

PySpark Sr. Consultant

The position is suited for individuals who have a demonstrated ability to work effectively in a fast-paced, high-volume, deadline-driven environment.

Education and Experience: B.Tech/M.Tech/MCA/MS and 6-9 years of experience in the design and implementation of migrating an enterprise legacy system to a Big Data ecosystem for a data warehousing project.

Required Skills: Excellent knowledge of Apache Spark and Python programming experience. Deep technical understanding of distributed computing and broader awareness of different Spark versions. Strong UNIX operating system concepts and shell scripting knowledge. Hands-on experience using Spark and Python. Deep experience in developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations. Experience in deploying and operationalizing code; knowledge of scheduling tools like Airflow, Control-M, etc. is preferred. Working experience on the AWS ecosystem, Google Cloud, BigQuery, etc. is an added advantage. Hands-on experience with AWS S3 filesystem operations. Good knowledge of Hadoop, Hive, and the Cloudera/Hortonworks Data Platform. Exposure to Jenkins or an equivalent CI/CD tool and a Git repository. Experience handling CDC operations for huge volumes of data (a sketch of a typical CDC merge follows below). Operating experience with the Agile delivery model. Experience in Spark-related performance tuning. Well versed in design documents such as HLD, TDD, etc. Well versed with data historical loads and overall framework concepts. Participation in different kinds of testing, such as unit testing, system testing, and user acceptance testing.

Preferred Skills: Exposure to PySpark, Cloudera/Hortonworks, Hadoop, and Hive. Exposure to AWS S3/EC2 and Apache Airflow. Participation in client interactions/meetings is desirable. Participation in code tuning is desirable.

Recruiting tips: From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose: Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development: From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 300041
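
One of the listed skills, handling CDC operations on large volumes, is often implemented as a latest-record merge. A hedged PySpark sketch (the paths, key, and timestamp column are hypothetical, and both inputs are assumed to share a schema) might look like:

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cdc-merge").getOrCreate()

# Hypothetical layout: an existing snapshot plus a day's CDC delta,
# both keyed by customer_id with an update timestamp.
snapshot = spark.read.parquet("s3://bucket/curated/customers/")
delta = spark.read.parquet("s3://bucket/incoming/customers_cdc/dt=2024-01-01/")

# Keep only the latest version of each key across snapshot + delta.
w = Window.partitionBy("customer_id").orderBy(F.col("update_ts").desc())
merged = (
    snapshot.unionByName(delta)
    .withColumn("rn", F.row_number().over(w))
    .filter("rn = 1")
    .drop("rn")
)

merged.write.mode("overwrite").parquet("s3://bucket/curated/customers_new/")
```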

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

LinkedIn

Position Summary

Strategy & Analytics, AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science, and cognitive technologies to uncover hidden relationships in vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to: implement large-scale data ecosystems, including data management, governance, and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms; leverage automation, cognitive, and science-based techniques to manage data, predict scenarios, and prescribe actions; and drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise, and providing as-a-service offerings for continuous insights and improvements.

Google Cloud Platform - Data Engineer

Cloud is shifting business models at our clients and transforming the way technology enables business. As our clients embark on this transformational journey to cloud, they are looking for trusted partners who can help them navigate it. Our clients' journeys span cloud strategy to implementation, migration of legacy applications, supporting operations of a cloud ecosystem, and everything in between. Deloitte's Cloud Delivery Center supports our client project teams on this journey by delivering the new solutions by which IT services are obtained, used, and managed. You will work with other technologists to deliver cutting-edge solutions using Google Cloud Platform (GCP) services, programming, and automation tools for some of our Fortune 1000 clients. You will have the opportunity to contribute to work that may involve building new cloud solutions, migrating an application to co-exist in the hybrid cloud, deploying a global cloud application across multiple countries, or supporting a set of cloud managed services. Our teams of technologists have a diverse range of skills, and we are always looking for new ways to innovate and help our clients succeed. You will have an opportunity to leverage the skills you already have, try new technologies, and develop skills that will improve your brand and career as a well-rounded, cutting-edge technologist.

Work you'll do: As a GCP Data Engineer you will have multiple responsibilities depending on project type. As a Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migrations and transformational projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform issues. In this role you are the Data Engineer working with Deloitte's most strategic Cloud customers. Together with the team, you will support customer implementation of Google Cloud products through architecture guidance, best practices, data migration, capacity planning, implementation, troubleshooting, monitoring, and much more.

The key responsibilities may involve some or all of the following: act as a trusted technical advisor to customers and solve complex Big Data challenges; create and deliver best-practice recommendations, tutorials, blog articles, sample code, and technical presentations, adapting to different levels of key business and technical stakeholders; and identify new tools and processes to improve the cloud platform and automate processes.

Qualifications

Technical Requirements: BA/BS degree in Computer Science, Mathematics, or a related technical field, or equivalent practical experience. Experience in Cloud SQL and Cloud Bigtable. Experience in Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub, and Genomics. Experience in Google Transfer Appliance, Cloud Storage Transfer Service, and BigQuery Data Transfer. Experience with data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and with data processing algorithms (MapReduce, Flume). Experience working with technical customers. Experience writing software in one or more languages such as Java, C++, Python, Go, and/or JavaScript.

Consulting Requirements: 3-6 years of relevant consulting, industry, or technology experience. Strong problem-solving and troubleshooting skills. Strong communicator. Willingness to travel as required by projects.

Preferred Qualifications: Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools and environments. Experience in technical consulting. Experience architecting, developing software, or building internet-scale, production-grade Big Data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS/Azure (good to have). Experience working with big data, information retrieval, data mining, or machine learning, as well as experience building multi-tier, high-availability applications with modern web technologies (such as NoSQL, Kafka, NLP, MongoDB, SparkML, TensorFlow). Working knowledge of ITIL and/or agile methodologies.

Recruiting tips: From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose: Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development: From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 300075
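
Dataflow pipelines on GCP are typically written with Apache Beam. As an illustrative sketch only (the bucket paths and parsing logic are hypothetical assumptions), a minimal Beam pipeline counting records per key could look like this; run locally it uses the DirectRunner, and the same code would target Dataflow with --runner=DataflowRunner plus project/region options:

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions()  # defaults to the local DirectRunner

with beam.Pipeline(options=options) as p:
    (
        p
        # Hypothetical CSV input whose first field is a country code.
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/input/events.csv")
        | "KeyByCountry" >> beam.Map(lambda line: (line.split(",")[0], 1))
        | "CountPerKey" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda k, n: f"{k},{n}")
        | "Write" >> beam.io.WriteToText("gs://my-bucket/output/counts")
    )
```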

Posted 1 week ago

Apply

3.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

LinkedIn

Position Summary

Strategy & Analytics, AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science, and cognitive technologies to uncover hidden relationships in vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to: implement large-scale data ecosystems, including data management, governance, and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms; leverage automation, cognitive, and science-based techniques to manage data, predict scenarios, and prescribe actions; and drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise, and providing as-a-service offerings for continuous insights and improvements.

PySpark Consultant

The position is suited for individuals who have a demonstrated ability to work effectively in a fast-paced, high-volume, deadline-driven environment.

Education and Experience: B.Tech/M.Tech/MCA/MS and 3-6 years of experience in the design and implementation of migrating an enterprise legacy system to a Big Data ecosystem for a data warehousing project.

Required Skills: Excellent knowledge of Apache Spark and Python programming experience. Deep technical understanding of distributed computing and broader awareness of different Spark versions. Strong UNIX operating system concepts and shell scripting knowledge. Hands-on experience using Spark and Python. Deep experience in developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment, and loading into target data destinations. Experience in deploying and operationalizing code; knowledge of scheduling tools like Airflow, Control-M, etc. is preferred (a minimal Airflow example follows below). Working experience on the AWS ecosystem, Google Cloud, BigQuery, etc. is an added advantage. Hands-on experience with AWS S3 filesystem operations. Good knowledge of Hadoop, Hive, and the Cloudera/Hortonworks Data Platform. Exposure to Jenkins or an equivalent CI/CD tool and a Git repository. Experience handling CDC operations for huge volumes of data. Operating experience with the Agile delivery model. Experience in Spark-related performance tuning. Well versed in design documents such as HLD, TDD, etc. Well versed with data historical loads and overall framework concepts. Participation in different kinds of testing, such as unit testing, system testing, and user acceptance testing.

Preferred Skills: Exposure to PySpark, Cloudera/Hortonworks, Hadoop, and Hive. Exposure to AWS S3/EC2 and Apache Airflow. Participation in client interactions/meetings is desirable. Participation in code tuning is desirable.

Recruiting tips: From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose: Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development: From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 300028
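
The posting prefers familiarity with scheduling tools such as Airflow. A minimal, hedged example of a daily Airflow DAG (2.4+) that submits a hypothetical PySpark job (the script path and cluster settings are assumptions) might be:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# A minimal daily DAG that submits a (hypothetical) PySpark job; teams often
# use SparkSubmitOperator from the Spark provider package instead.
with DAG(
    dag_id="daily_enrichment",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_spark = BashOperator(
        task_id="spark_submit_enrichment",
        bash_command=(
            "spark-submit --master yarn --deploy-mode cluster "
            "/opt/jobs/enrich_orders.py --run-date {{ ds }}"
        ),
    )
```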

Posted 1 week ago

Apply

8.0 - 13.0 years

25 - 40 Lacs

Bengaluru

Hybrid

Naukri

Job Title / Primary Skill: Big Data Developer (Lead/Associate Manager)
Management Level: G150
Years of Experience: 8 to 13 years
Job Location: Bangalore (Hybrid)
Must-Have Skills: Big Data, Spark, Scala, SQL, Hadoop ecosystem
Educational Qualification: BE/BTech/MTech/MCA, or a bachelor's or master's degree in Computer Science.

Job Overview: Overall experience of 8+ years in IT, software engineering, or a relevant discipline. Designs, develops, implements, and updates software systems in accordance with the needs of the organization. Evaluates, schedules, and resources development projects; investigates user needs; and documents, tests, and maintains computer programs.

Job Description: We are looking for developers with strong Scala programming skills and knowledge of SQL.

Technical Skills: Scala, Python: Scala is often used for Hadoop-based projects, while Python and Scala are common choices for Apache Spark-based projects. SQL: knowledge of SQL (Structured Query Language) is important for querying and manipulating data. Shell scripting: shell scripts are used for batch processing of data; they can be used for scheduling jobs and are often used for deploying applications. Spark Scala: Spark Scala allows you to write Spark applications using the Spark API in Scala. Spark SQL: allows you to work with structured data using SQL-like queries and the DataFrame API. You can execute SQL queries against DataFrames, enabling easy data exploration, transformation, and analysis, as sketched below.

The typical tasks and responsibilities of a Big Data Developer include: 1. Data Ingestion: collecting and importing data from various sources, such as databases, logs, and APIs, into the Big Data infrastructure. 2. Data Processing: designing data pipelines to clean, transform, and prepare raw data for analysis, often using technologies like Apache Hadoop and Apache Spark. 3. Data Storage: selecting appropriate data storage technologies like the Hadoop Distributed File System (HDFS), Hive, Impala, or cloud-based storage solutions (Snowflake, Databricks).
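
To make the Spark SQL point above concrete, here is a tiny runnable sketch; the posting favors Scala, but the same workflow is shown in PySpark with made-up data:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-demo").getOrCreate()

# Small inline DataFrame standing in for ingested data (columns hypothetical).
sales = spark.createDataFrame(
    [("IN", "2024-01-01", 120.0), ("IN", "2024-01-02", 80.0),
     ("US", "2024-01-01", 200.0)],
    ["country", "sale_date", "amount"],
)

# Register the DataFrame as a view, then query it with plain SQL,
# the Spark SQL workflow the posting describes.
sales.createOrReplaceTempView("sales")
spark.sql("""
    SELECT country, SUM(amount) AS total
    FROM sales
    GROUP BY country
    ORDER BY total DESC
""").show()
```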

Posted 1 week ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site

LinkedIn

Summary

Position Summary

Strategy & Analytics
AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships in vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:

• Implement large-scale data ecosystems, including data management, governance and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
• Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
• Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements

Google Cloud Platform - Data Engineer

Cloud is shifting business models at our clients and transforming the way technology enables business. As our clients embark on this transformational journey to the cloud, they are looking for trusted partners who can help them navigate it. Our clients' journeys span cloud strategy, implementation, migration of legacy applications, support of cloud ecosystem operations, and everything in between. Deloitte's Cloud Delivery Center supports our client project teams on this journey by delivering the new solutions through which IT services are obtained, used, and managed. You will work with other technologists to deliver cutting-edge solutions using Google Cloud Platform (GCP) services and programming and automation tools for some of our Fortune 1000 clients. You will have the opportunity to contribute to work that may involve building a new cloud solution, migrating an application to co-exist in the hybrid cloud, deploying a global cloud application across multiple countries, or supporting a set of cloud managed services. Our teams of technologists have a diverse range of skills and we are always looking for new ways to innovate and help our clients succeed. You will have an opportunity to leverage the skills you already have, try new technologies and develop skills that will improve your brand and career as a well-rounded, cutting-edge technologist.

Work you'll do

As a GCP Data Engineer you will have multiple responsibilities depending on project type. As a Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze and explore/visualize data on the Google Cloud Platform. You will work on data migrations and transformational projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform issues. In this role you are the Data Engineer working with Deloitte's most strategic Cloud customers. Together with the team you will support customer implementation of Google Cloud products through architecture guidance, best practices, data migration, capacity planning, implementation, troubleshooting, monitoring and much more (see the sketch after this posting).

The key responsibilities may involve some or all of the areas listed below:

• Act as a trusted technical advisor to customers and solve complex Big Data challenges.
• Create and deliver best-practice recommendations, tutorials, blog articles, sample code, and technical presentations, adapting to different levels of key business and technical stakeholders.
• Identify new tools and processes to improve the cloud platform and automate processes.

Qualifications

Technical Requirements

• BA/BS degree in Computer Science, Mathematics or a related technical field, or equivalent practical experience.
• Experience in Cloud SQL and Cloud Bigtable
• Experience in Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub and Genomics
• Experience in Google Transfer Appliance, Cloud Storage Transfer Service, BigQuery Data Transfer
• Experience with data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and data processing frameworks (MapReduce, Flume).
• Experience working with technical customers.
• Experience writing software in one or more languages such as Java, C++, Python, Go and/or JavaScript.

Consulting Requirements

• 3-6 years of relevant consulting, industry or technology experience
• Strong problem-solving and troubleshooting skills
• Strong communicator
• Willingness to travel when required by the project

Preferred Qualifications

• Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools and environments.
• Experience in technical consulting.
• Experience architecting, developing software, or building internet-scale production-grade Big Data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS/Azure (good to have)
• Experience working with big data, information retrieval, data mining or machine learning, as well as experience building multi-tier high-availability applications with modern web technologies (such as NoSQL, Kafka, NLP, MongoDB, SparkML, TensorFlow).
• Working knowledge of ITIL and/or agile methodologies

Recruiting tips

From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits

At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose

Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development

From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 300075
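For readers unfamiliar with the Dataflow work this posting describes, here is a minimal Apache Beam sketch in Python of the kind of pipeline a GCP Data Engineer might build. The bucket paths and record layout are hypothetical, and a real Dataflow job would also supply project, region, and runner options.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Hypothetical example: clean a CSV in Cloud Storage and write the result back.
# With default options this runs locally on the DirectRunner; submitting to the
# Dataflow service requires additional pipeline options.
with beam.Pipeline(options=PipelineOptions()) as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/input/events.csv")
        | "Parse" >> beam.Map(lambda line: line.split(","))
        | "KeepValid" >> beam.Filter(lambda row: len(row) == 3 and row[0])
        | "Format" >> beam.Map(lambda row: ",".join(row))
        | "Write" >> beam.io.WriteToText("gs://example-bucket/output/events_clean")
    )
```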

Posted 1 week ago

Apply

6.0 years

0 Lacs

Greater Kolkata Area

On-site

Linkedin logo

Summary

Position Summary

Strategy & Analytics
AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships in vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:

• Implement large-scale data ecosystems, including data management, governance and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
• Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
• Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements

Google Cloud Platform - Data Engineer

Cloud is shifting business models at our clients and transforming the way technology enables business. As our clients embark on this transformational journey to the cloud, they are looking for trusted partners who can help them navigate it. Our clients' journeys span cloud strategy, implementation, migration of legacy applications, support of cloud ecosystem operations, and everything in between. Deloitte's Cloud Delivery Center supports our client project teams on this journey by delivering the new solutions through which IT services are obtained, used, and managed. You will work with other technologists to deliver cutting-edge solutions using Google Cloud Platform (GCP) services and programming and automation tools for some of our Fortune 1000 clients. You will have the opportunity to contribute to work that may involve building a new cloud solution, migrating an application to co-exist in the hybrid cloud, deploying a global cloud application across multiple countries, or supporting a set of cloud managed services. Our teams of technologists have a diverse range of skills and we are always looking for new ways to innovate and help our clients succeed. You will have an opportunity to leverage the skills you already have, try new technologies and develop skills that will improve your brand and career as a well-rounded, cutting-edge technologist.

Work you'll do

As a GCP Data Engineer you will have multiple responsibilities depending on project type. As a Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze and explore/visualize data on the Google Cloud Platform. You will work on data migrations and transformational projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform issues. In this role you are the Data Engineer working with Deloitte's most strategic Cloud customers. Together with the team you will support customer implementation of Google Cloud products through architecture guidance, best practices, data migration, capacity planning, implementation, troubleshooting, monitoring and much more.

The key responsibilities may involve some or all of the areas listed below:

• Act as a trusted technical advisor to customers and solve complex Big Data challenges.
• Create and deliver best-practice recommendations, tutorials, blog articles, sample code, and technical presentations, adapting to different levels of key business and technical stakeholders.
• Identify new tools and processes to improve the cloud platform and automate processes.

Qualifications

Technical Requirements

• BA/BS degree in Computer Science, Mathematics or a related technical field, or equivalent practical experience.
• Experience in Cloud SQL and Cloud Bigtable
• Experience in Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub and Genomics
• Experience in Google Transfer Appliance, Cloud Storage Transfer Service, BigQuery Data Transfer
• Experience with data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and data processing frameworks (MapReduce, Flume).
• Experience working with technical customers.
• Experience writing software in one or more languages such as Java, C++, Python, Go and/or JavaScript.

Consulting Requirements

• 6-9 years of relevant consulting, industry or technology experience
• Strong problem-solving and troubleshooting skills
• Strong communicator
• Willingness to travel when required by the project

Preferred Qualifications

• Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools and environments.
• Experience in technical consulting.
• Experience architecting, developing software, or building internet-scale production-grade Big Data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS/Azure (good to have)
• Experience working with big data, information retrieval, data mining or machine learning, as well as experience building multi-tier high-availability applications with modern web technologies (such as NoSQL, Kafka, NLP, MongoDB, SparkML, TensorFlow).
• Working knowledge of ITIL and/or agile methodologies

Recruiting tips

From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits

At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose

Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development

From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 300079

Posted 1 week ago

Apply

6.0 - 9.0 years

0 Lacs

Greater Kolkata Area

On-site

Linkedin logo

Summary

Position Summary

Strategy & Analytics
AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships in vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:

• Implement large-scale data ecosystems, including data management, governance and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
• Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
• Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements

PySpark Sr. Consultant

The position is suited for individuals who have a demonstrated ability to work effectively in a fast-paced, high-volume, deadline-driven environment.

Education and Experience

• Education: B.Tech/M.Tech/MCA/MS
• 6-9 years of experience in the design and implementation of migrating an enterprise legacy system to a Big Data ecosystem for a data warehousing project.

Required Skills

• Excellent knowledge of Apache Spark and Python programming experience
• Deep technical understanding of distributed computing and broader awareness of different Spark versions
• Strong UNIX operating system concepts and shell scripting knowledge
• Hands-on experience using Spark & Python
• Deep experience developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment and loading into target data destinations (see the sketch after this posting).
• Experience in deployment and operationalizing the code; knowledge of scheduling tools like Airflow, Control-M, etc. is preferred
• Working experience on the AWS ecosystem, Google Cloud, BigQuery, etc. is an added advantage
• Hands-on experience with AWS S3 filesystem operations
• Good knowledge of Hadoop, Hive and the Cloudera/Hortonworks Data Platform
• Exposure to Jenkins or an equivalent CI/CD tool and a Git repository
• Experience handling CDC operations for huge volumes of data
• Understanding of, and operating experience with, the Agile delivery model
• Experience in Spark-related performance tuning
• Well versed in design documents like HLD, TDD, etc.
• Well versed with data historical loads and overall framework concepts
• Participation in different kinds of testing, such as unit testing, system testing and user acceptance testing

Preferred Skills

• Exposure to PySpark, Cloudera/Hortonworks, Hadoop and Hive.
• Exposure to AWS S3/EC2 and Apache Airflow
• Participation in client interactions/meetings is desirable.
• Participation in code-tuning is desirable.

Recruiting tips

From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits

At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose

Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development

From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 300041
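As an illustration of the PySpark tasks described above (reading from external sources, merging, enriching, loading), here is a minimal sketch. The S3 paths, column names and table layout are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("enrichment_example").getOrCreate()

# Hypothetical sources: raw orders as CSV, customer reference data as Parquet.
orders = spark.read.option("header", True).csv("s3a://example-bucket/raw/orders/")
customers = spark.read.parquet("s3a://example-bucket/ref/customers/")

# Merge and enrich: attach customer attributes and stamp a load date column.
enriched = (
    orders.join(customers, on="customer_id", how="left")
          .withColumn("load_dt", F.current_date())
)

# Load into the target destination, partitioned for downstream pruning.
enriched.write.mode("overwrite").partitionBy("load_dt").parquet(
    "s3a://example-bucket/curated/orders_enriched/"
)
```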

Posted 1 week ago

Apply

3.0 - 6.0 years

0 Lacs

Greater Kolkata Area

On-site

Linkedin logo

Summary

Position Summary

Strategy & Analytics
AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships in vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:

• Implement large-scale data ecosystems, including data management, governance and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
• Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
• Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements

PySpark Consultant

The position is suited for individuals who have a demonstrated ability to work effectively in a fast-paced, high-volume, deadline-driven environment.

Education and Experience

• Education: B.Tech/M.Tech/MCA/MS
• 3-6 years of experience in the design and implementation of migrating an enterprise legacy system to a Big Data ecosystem for a data warehousing project.

Required Skills

• Excellent knowledge of Apache Spark and Python programming experience
• Deep technical understanding of distributed computing and broader awareness of different Spark versions
• Strong UNIX operating system concepts and shell scripting knowledge
• Hands-on experience using Spark & Python
• Deep experience developing data processing tasks using PySpark, such as reading data from external sources, merging data, performing data enrichment and loading into target data destinations.
• Experience in deployment and operationalizing the code; knowledge of scheduling tools like Airflow, Control-M, etc. is preferred
• Working experience on the AWS ecosystem, Google Cloud, BigQuery, etc. is an added advantage
• Hands-on experience with AWS S3 filesystem operations
• Good knowledge of Hadoop, Hive and the Cloudera/Hortonworks Data Platform
• Exposure to Jenkins or an equivalent CI/CD tool and a Git repository
• Experience handling CDC operations for huge volumes of data
• Understanding of, and operating experience with, the Agile delivery model
• Experience in Spark-related performance tuning
• Well versed in design documents like HLD, TDD, etc.
• Well versed with data historical loads and overall framework concepts
• Participation in different kinds of testing, such as unit testing, system testing and user acceptance testing

Preferred Skills

• Exposure to PySpark, Cloudera/Hortonworks, Hadoop and Hive.
• Exposure to AWS S3/EC2 and Apache Airflow
• Participation in client interactions/meetings is desirable.
• Participation in code-tuning is desirable.

Recruiting tips

From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits

At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose

Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development

From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 300028

Posted 1 week ago

Apply

3.0 - 8.0 years

11 - 21 Lacs

Pune

Work from Office

Naukri logo

Hiring for a Denodo Admin with 3+ years of experience and the skills below.

Must have:
• Denodo administration: logical data models, views & caching
• ETL pipelines (Informatica/Talend) for EDW/data lakes; diagnosing performance issues
• SQL, Informatica, Talend, Big Data, Hive

Required candidate profile:
• Design, develop & maintain ETL pipelines using Informatica PowerCenter or Talend to extract, Hive
• Optimize & troubleshoot complex SQL queries
• Immediate joiners preferred
• Work from office is a must

Posted 1 week ago

Apply

4.0 - 9.0 years

9 - 18 Lacs

Pune, Gurugram

Work from Office

Naukri logo

This opening covers two Data Engineer profiles. The first specializes in traditional ETL with SAS DI and Big Data (Hadoop, Hive). The second is more versatile, skilled in modern data engineering with Python, MongoDB, and real-time processing.

Posted 1 week ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Detailed Job Description for Solution Architect (PAN India)

Architectural Assessment & Roadmapping

• Conduct a comprehensive assessment of the current R&D Data Lake architecture.
• Propose and design the architecture for the next-generation self-service R&D Data Lake based on defined product specifications.
• Contribute to defining a detailed architectural roadmap that incorporates the latest enterprise patterns and strategic recommendations for the engineering team.

Data Ingestion & Processing Enhancements

• Design and prototype updated data ingestion mechanisms that meet GxP validation requirements and improve data flow efficiency.
• Architect advanced data and metadata processing techniques to enhance data quality and accessibility.

Storage Patterns Optimization

• Evaluate optimized storage patterns to ensure scalability, performance, and cost-effectiveness.
• Design updated storage solutions aligned with technical roadmap objectives and compliance standards.

Data Handling & Governance

• Define and document standardized data handling procedures that adhere to GxP and data governance policies.
• Collaborate with governance teams to ensure procedures align with regulatory standards and best practices.
• Assess current security measures and implement robust access controls to protect sensitive R&D data.
• Ensure that all security enhancements adhere to enterprise security frameworks and regulatory requirements.
• Design and implement comprehensive data cataloguing procedures to improve data discoverability and usability (see the sketch after this posting).
• Integrate cataloguing processes with existing data governance frameworks to maintain continuity and compliance.
• Recommend and oversee the implementation of new tools and technologies related to ingestion, storage, processing, handling, security, and cataloguing.
• Design and plan to ensure seamless integration and minimal disruption during technology updates.
• Collaborate on the ongoing maintenance of, and provide technical support for, legacy data ingestion pipelines throughout the uplift project.
• Ensure legacy systems remain stable, reliable, and efficient during the transition period.
• Work closely with the R&D IT team, data governance groups, and other stakeholders for coordinated and effective implementation of architectural updates.
• Collaborate in knowledge transfer sessions to equip internal teams to manage and maintain the new architecture post-project.

Required Skills

• Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent hands-on experience.
• Minimum 10 years of experience in solution architecture, with a strong background in data architecture and enterprise data management.
• Strong understanding of cloud-native platforms, with a preference for AWS.
• Knowledgeable in distributed data architectures, including services like S3, Glue, and Lake Formation.
• Proven experience in programming languages and tools relevant to data engineering (e.g., Python, Scala).
• Experienced with Big Data technologies like Hadoop, Cassandra, Spark, Hive, and Kafka.
• Skilled in using querying tools such as Redshift, Spark SQL, Hive, and Presto.
• Demonstrated experience in data modeling, data pipeline development and data warehousing.

Infrastructure and Deployment

• Familiar with Infrastructure-as-Code tools, including Terraform and CloudFormation.
• Experienced in building systems around the CI/CD concept.
• Hands-on experience with AWS services and other cloud platforms.
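To make the cataloguing responsibility concrete, here is a small sketch using boto3 against the AWS Glue Data Catalog, one of the services the posting names. The region, database name and printed fields are illustrative assumptions.

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")  # region is illustrative

# List tables registered in a (hypothetical) R&D data lake database, so that
# datasets stay discoverable through the central catalog.
paginator = glue.get_paginator("get_tables")
for page in paginator.paginate(DatabaseName="rnd_data_lake"):
    for table in page["TableList"]:
        location = table.get("StorageDescriptor", {}).get("Location", "n/a")
        print(f"{table['Name']}\t{location}")
```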

Posted 1 week ago

Apply

3.0 - 7.0 years

15 - 20 Lacs

Hyderabad, Gurugram

Work from Office

Naukri logo

Role: Hadoop Data Engineer
Location: Gurgaon / Hyderabad
Work Mode: Hybrid
Employment Type: Full-Time
Interview Mode: First video, then in person

Job Overview: We are looking for experienced Data Engineers proficient in Hadoop, Hive, Python, SQL, and PySpark/Spark to join our dynamic team. Candidates will be responsible for designing, developing, and maintaining scalable big data solutions.

Key Responsibilities:
• Develop and optimize data pipelines for large-scale data processing (see the sketch after this posting).
• Work with structured and unstructured datasets to derive actionable insights.
• Collaborate with cross-functional teams to enhance data-driven decision-making.
• Ensure the performance, scalability, and reliability of data architectures.
• Implement best practices for data security and governance.
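A minimal sketch of the kind of Hive-backed PySpark pipeline this role involves; the database, table and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# enableHiveSupport lets Spark read and write tables in the Hive metastore.
spark = (
    SparkSession.builder
    .appName("daily_sales_rollup")
    .enableHiveSupport()
    .getOrCreate()
)

# Hypothetical partitioned Hive table; filtering on the partition column (dt)
# prunes partitions instead of scanning the whole table.
daily = spark.table("sales.transactions").where(F.col("dt") == "2024-01-01")

summary = daily.groupBy("store_id").agg(F.sum("amount").alias("total_amount"))
summary.write.mode("overwrite").saveAsTable("sales.daily_store_summary")
```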

Posted 1 week ago

Apply

5.0 years

6 - 7 Lacs

Hyderābād

On-site

GlassDoor logo

Job description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist. In this role you will:

• Design and develop ETL processes: Lead the design and implementation of ETL processes using all kinds of batch/streaming tools to extract, transform, and load data from various sources into GCP. Collaborate with stakeholders to gather requirements and ensure that ETL solutions meet business needs.
• Optimize data pipelines: Optimize data pipelines for performance, scalability, and reliability, ensuring efficient data processing workflows. Monitor and troubleshoot ETL processes, proactively addressing issues and bottlenecks.
• Integrate and manage data: Integrate data from diverse sources, including databases, APIs, and flat files, ensuring data quality and consistency. Manage and maintain data storage solutions in GCP (e.g., BigQuery, Cloud Storage) to support analytics and reporting.
• Develop GCP Dataflow jobs: Write Apache Beam based Dataflow jobs for data extraction, transformation, and analysis, ensuring optimal performance and accuracy. Collaborate with data analysts and data scientists to prepare data for analysis and reporting.
• Automate and monitor: Implement automation for ETL workflows using tools like Apache Airflow or Cloud Composer, enhancing efficiency and reducing manual intervention (see the sketch after this posting). Set up monitoring and alerting mechanisms to ensure the health of data pipelines and compliance with SLAs.
• Govern and secure data: Apply best practices for data governance, ensuring compliance with industry regulations (e.g., GDPR, HIPAA) and internal policies. Collaborate with security teams to implement data protection measures and address vulnerabilities.
• Document and share knowledge: Document ETL processes, data models, and architecture to facilitate knowledge sharing and onboarding of new team members. Conduct training sessions and workshops to share expertise and promote best practices within the team.

Requirements

To be successful in this role, you should meet the following requirements:

• Education: Bachelor's degree in Computer Science, Information Systems, or a related field.
• Experience: Minimum of 5 years of industry experience in data engineering or ETL development, with a strong focus on DataStage and GCP. Proven experience in designing and managing ETL solutions, including data modeling, data warehousing, and SQL development.
• Technical skills: Strong knowledge of GCP services (e.g., BigQuery, Dataflow, Cloud Storage, Pub/Sub) and their application in data engineering. Experience with cloud-based solutions, especially in GCP; a cloud-certified candidate is preferred. Experience and knowledge of Big Data processing in batch and streaming modes, and proficiency in Big Data ecosystems, e.g. Hadoop, HBase, Hive, MapReduce, Kafka, Flink, Spark, etc. Familiarity with Java & Python for data manipulation on Cloud/Big Data platforms.
• Analytical skills: Strong problem-solving skills with a keen attention to detail. Ability to analyze complex data sets and derive meaningful insights.

Benefits: Competitive salary and comprehensive benefits package. Opportunity to work in a dynamic and collaborative environment on cutting-edge data projects. Professional development opportunities to enhance your skills and advance your career.

If you are a passionate data engineer with expertise in ETL processes and a desire to make a significant impact within our organization, we encourage you to apply for this exciting opportunity!

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSDI
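As a sketch of the Airflow/Cloud Composer automation the role describes, here is a minimal two-task DAG; the DAG id and task commands are placeholder assumptions.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Minimal daily ETL skeleton: extract, then load, with load waiting on extract.
with DAG(
    dag_id="etl_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extracting")
    load = BashOperator(task_id="load", bash_command="echo loading")
    extract >> load
```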

Posted 1 week ago

Apply

3.0 years

4 - 6 Lacs

Hyderābād

On-site

GlassDoor logo

• 3+ years of data engineering experience
• 4+ years of SQL experience
• Experience with data modeling, warehousing and building ETL pipelines

As a Data Engineer you will be working on building and maintaining complex data pipelines, assembling large and complex datasets to generate business insights, enabling data-driven decision making, and supporting the rapidly growing and dynamic business demand for data. You will have an opportunity to collaborate and work with various teams of business analysts, managers, software development engineers, and data engineers to determine how best to design, implement and support solutions. You will be challenged and provided with tremendous growth opportunity in a customer-facing, fast-paced, agile environment.

Key job responsibilities

• Design, implement and support analytical data platform solutions for data-driven decisions and insights
• Design data schemas and operate internal data warehouses and SQL/NoSQL database systems
• Work on different data model designs, architecture, implementation, discussions and optimizations
• Interface with other teams to extract, transform, and load data from a wide variety of data sources using AWS big data technologies like EMR, Redshift, Elasticsearch, etc.
• Work on different AWS technologies such as S3, Redshift, Lambda, Glue, etc., and explore and learn the latest AWS technologies to provide new capabilities and increase efficiency
• Work on the data lake platform and different components in the data lake, such as Hadoop, Amazon S3, etc.
• Work on SQL technologies on Hadoop, such as Spark, Hive, Impala, etc.
• Help continually improve ongoing analysis processes, optimizing or simplifying self-service support for customers
• Possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment
• Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation
• Enjoy working closely with your peers in a group of talented engineers and gain knowledge
• Be enthusiastic about building deep domain knowledge of Amazon's various business domains
• Own the development and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions

Preferred qualifications

• Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
• Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 1 week ago

Apply

0 years

0 Lacs

Chandigarh, India

On-site

Linkedin logo

Company Profile

Oceaneering is a global provider of engineered services and products, primarily to the offshore energy industry. We develop products and services for use throughout the lifecycle of an offshore oilfield, from drilling to decommissioning. We operate the world's premier fleet of work-class ROVs. Additionally, we are a leader in offshore oilfield maintenance services, umbilicals, subsea hardware, and tooling. We also use applied technology expertise to serve the defense, entertainment, material handling, aerospace, science, and renewable energy industries.

Since 2003, Oceaneering's India Center has been an integral part of operations for Oceaneering's robust product and service offerings across the globe. This center caters to diverse business needs, from oil and gas field infrastructure and subsea robotics to automated material handling & logistics. Our multidisciplinary team offers a wide spectrum of solutions, encompassing Subsea Engineering, Robotics, Automation, Control Systems, Software Development, Asset Integrity Management, Inspection, ROV operations, Field Network Management, Graphics Design & Animation, and more. In addition to these technical functions, Oceaneering India Center plays host to several crucial business functions, including Finance, Supply Chain Management (SCM), Information Technology (IT), Human Resources (HR), and Health, Safety & Environment (HSE). Our world-class infrastructure in India includes modern offices, industry-leading tools and software, equipped labs, and beautiful campuses aligned with the future way of work. Oceaneering in India, as well as globally, has a great work culture that is flexible, transparent, and collaborative, with great team synergy. At Oceaneering India Center, we take pride in "Solving the Unsolvable" by leveraging the diverse expertise within our team. Join us in shaping the future of technology and engineering solutions on a global scale.

Position Summary

The Principal Data Scientist will develop Machine Learning and/or Deep Learning based integrated solutions that address customer needs such as inspection topside and subsea. They will also be responsible for the development of machine learning algorithms for automation and of data analytics programs for Oceaneering's next-generation systems. The position requires the Principal Data Scientist to work with various Oceaneering business units across global time zones, but also offers the flexibility to work in a hybrid work-office environment.

Essential Duties and Responsibilities

• Lead and supervise a team of moderately experienced engineers on product/prototype design & development assignments or applications.
• Work both independently and collaboratively to develop custom data models and algorithms to apply to data sets that will be deployed in existing and new products.
• Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.
• Assess the effectiveness and accuracy of new data sources and data gathering techniques.
• Build data models and organize structured and unstructured data to interpret solutions. Prepare data for predictive and prescriptive modeling.
• Architect solutions by selecting appropriate technology and components.
• Determine the technical direction and strategy for solving complex, significant, or major issues.
• Plan and evaluate architectural design and identify technical risks and associated ways to mitigate those risks.
• Prepare design proposals to reflect cost, schedule, and technical approaches. Recommend test controls, strategies, apparatus, and equipment.
• Develop, construct, test, and maintain architectures.
• Lead research activities for ongoing government and commercial projects and products.
• Collaborate on proposals, grants, and publications in algorithm development. Collect data as warranted to support the algorithm development efforts.
• Work directly with software engineers to implement algorithms in commercial software products. Work with third parties to utilize off-the-shelf industrial solutions.
• Develop algorithms in key research areas based on the client's technical problem. This requires constant paper reading and staying ahead of the game by knowing what is, and what will be, state of the art in this field.
• Work hands-on in cross-functional teams with a strong sense of self-direction.

Non-essential

• Develop an awareness of programming and design alternatives
• Cultivate and disseminate knowledge of application development best practices
• Gather statistics and prepare and write reports on the status of the programming process for discussion with management and/or team members
• Direct research on emerging application development software products, languages, and standards in support of procurement and development efforts
• Train, manage and provide guidance to junior staff
• Perform all other duties as requested, directed or assigned

Supervisory Responsibilities

This position does not have direct supervisory responsibilities.

Reporting Relationship

Engagement Head

Qualifications

Required

• Bachelor's degree in Electronics and Electrical Engineering (or a related field) with eight or more years of experience working on Machine Learning and Deep Learning based projects, OR a Master's degree in Data Science (or a related field) with six or more years of experience working on Machine Learning and Deep Learning based projects

Desired

• Strong knowledge of advanced statistical functions: histograms and distributions, regression studies, scenario analysis, etc. (see the sketch after this posting)
• Proficient in Object-Oriented Analysis, Design and Programming
• Strong background in data engineering tools like Python/C#, R, Apache Spark, Scala, etc.
• Prior experience handling large amounts of data, including texts, shapes, sounds, images and/or videos.
• Knowledge of SaaS platforms like Microsoft Fabric, Databricks, Snowflake, h2o, etc.
• Background experience working on cloud platforms like Azure (ML), AWS (SageMaker), or GCP (Vertex), etc.
• Proficient in querying SQL and NoSQL databases
• Hands-on experience with various databases like MySQL/PostgreSQL/Oracle, MongoDB, InfluxDB, TimescaleDB, neo4j, Arango, Redis, Cassandra, etc.
• Prior experience with at least one probabilistic/statistical ambiguity resolution algorithm
• Proficient in Windows and Linux operating systems
• Basic understanding of ML frameworks like PyTorch and TensorFlow
• Basic understanding of messaging technologies used in IoT, like Kafka, MQTT or RabbitMQ
• Prior experience with big data platforms like Hadoop, Apache Spark, or Hive is a plus.

Knowledge, Skills, Abilities, and Other Characteristics

• Ability to analyze situations accurately, utilizing a variety of analytical techniques in order to make well-informed decisions
• Ability to effectively prioritize and execute tasks in a high-pressure environment
• Skill to gather, analyze and interpret data
• Ability to determine and meet customer needs
• Ensures that others involved in a project or effort are kept informed about developments and plans
• Knowledge of communication styles and techniques
• Ability to establish and maintain cooperative working relationships
• Skill to prioritize workflow in a changing work environment
• Knowledge of applicable data privacy practices and laws
• Strong analytical and problem-solving skills

Additional Information

This position is considered OFFICE WORK, which is characterized as follows:

• Almost exclusively indoors during the day and occasionally at night
• Occasional exposure to airborne dust in the workplace
• Work surface is stable (flat)

The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. This position is considered LIGHT work.

• Occasional: Lift up to 20 pounds; climbing, stooping, kneeling, squatting, and reaching
• Frequent: Lift up to 10 pounds; standing
• Constant: Repetitive movements of arms and hands; sit with back supported

Closing Statement

In addition, we make a priority of providing learning and development opportunities to enable employees to achieve their potential and take charge of their future. As well as developing employees in a specific role, we are committed to lifelong learning and ongoing education, including developing people skills and identifying future supervisors and managers. Every month, hundreds of employees are provided training, including HSE awareness, apprenticeships, entry and advanced level technical courses, management development seminars, and leadership and supervisory training. We have a strong ethos of internal promotion. We can offer long-term employment and career advancement across countries and continents. Working at Oceaneering means that if you have the ability, drive, and ambition to take charge of your future, you will be supported to do so and the possibilities are endless.

Equal Opportunity/Inclusion

Oceaneering's policy is to provide equal employment opportunity to all applicants.
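To illustrate the "regression studies" listed under desired skills, here is a tiny scikit-learn sketch on synthetic data; every number in it is made up for the example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data: a noisy linear relationship, roughly y = 3x + 2.
rng = np.random.default_rng(seed=0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 1.0, size=100)

# Fit and inspect the model; R^2 summarizes how much variance is explained.
model = LinearRegression().fit(X, y)
print("slope:", model.coef_[0], "intercept:", model.intercept_)
print("R^2:", model.score(X, y))
```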

Posted 1 week ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Linkedin logo

Description

Data Engineer

Responsibilities:

• Deliver end-to-end data and analytics capabilities, including data ingest, data transformation, data science, and data visualization, in collaboration with Data and Analytics stakeholder groups
• Design and deploy databases and data pipelines to support analytics projects
• Develop scalable and fault-tolerant workflows
• Clearly document issues, solutions, findings and recommendations to be shared internally & externally
• Learn and apply tools and technologies proficiently, including: Languages: Python, PySpark, ANSI SQL, Python ML libraries; Frameworks/Platforms: Spark, Snowflake, Airflow, Hadoop, Kafka; Cloud Computing: AWS; Tools/Products: PyCharm, Jupyter, Tableau, PowerBI
• Optimize performance of queries and dashboards
• Develop and deliver clear, compelling briefings to internal and external stakeholders on findings, recommendations, and solutions
• Analyze client data & systems to determine whether requirements can be met
• Test and validate data pipelines, transformations, datasets, reports, and dashboards built by the team
• Develop and communicate solution architectures and present solutions to both business and technical stakeholders
• Provide end-user support to other data engineers and analysts

Candidate Requirements

Expert experience in the following [should have / good to have]:

• SQL, Python, PySpark, Python ML libraries; other programming languages (R, Scala, SAS, Java, etc.) are a plus
• Data and analytics technologies including SQL/NoSQL/graph databases, ETL, and BI
• Knowledge of CI/CD and related tools such as GitLab, AWS CodeCommit, etc.
• AWS services including EMR, Glue, Athena, Batch, Lambda, CloudWatch, DynamoDB, EC2, CloudFormation, IAM and EDS
• Exposure to Snowflake and Airflow
• Solid scripting skills (e.g., bash/shell scripts, Python)

Proven work experience in the following:

• Data streaming technologies
• Big Data technologies including Hadoop, Spark, Hive, Teradata, etc.
• Linux command-line operations
• Networking knowledge (OSI network layers, TCP/IP, virtualization)

The candidate should be able to lead the team, communicate with the business, and gather and interpret business requirements.

• Experience with agile delivery methodologies using Jira or similar tools
• Experience working with remote teams
• AWS Solutions Architect / Developer / Data Analytics Specialty certifications; professional certification is a plus
• Bachelor's degree in Computer Science or a relevant field; a Master's degree is a plus

Posted 1 week ago

Apply

Exploring Hive Jobs in India

Hive is a popular data warehousing tool used for querying and managing large datasets in distributed storage. In India, the demand for professionals with expertise in Hive is on the rise, with many organizations looking to hire skilled individuals for various roles related to data processing and analysis.
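For context, here is a minimal example of working with Hive-managed data from PySpark; the table and column names are hypothetical, and the same statements would run as plain HiveQL through beeline.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive_intro")
    .enableHiveSupport()  # connect to the Hive metastore
    .getOrCreate()
)

# Hypothetical table: create it if absent, then run an aggregate query over it.
spark.sql("""
    CREATE TABLE IF NOT EXISTS web_logs (ts STRING, url STRING, status INT)
    STORED AS PARQUET
""")
spark.sql("""
    SELECT status, COUNT(*) AS hits
    FROM web_logs
    GROUP BY status
""").show()
```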

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Delhi

These cities are known for their thriving tech industries and offer numerous opportunities for professionals looking to work with Hive.

Average Salary Range

The average salary range for Hive professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.

Career Path

Typically, a career in Hive progresses from roles such as Junior Developer or Data Analyst to Senior Developer, Tech Lead, and eventually Architect or Data Engineer. Continuous learning and hands-on experience with Hive are crucial for advancing in this field.

Related Skills

Apart from expertise in Hive, professionals in this field are often expected to have knowledge of SQL, Hadoop, data modeling, ETL processes, and data visualization tools like Tableau or Power BI.

Interview Questions

  • What is Hive and how does it differ from traditional databases? (basic)
  • Explain the difference between HiveQL and SQL. (medium)
  • How do you optimize Hive queries for better performance? (advanced)
  • What are the different types of tables supported in Hive? (basic)
  • Can you explain the concept of partitioning in Hive tables? (medium)
  • What is the significance of metastore in Hive? (basic)
  • How does Hive handle schema evolution? (advanced)
  • Explain the use of SerDe in Hive. (medium)
  • What are the various file formats supported by Hive? (basic)
  • How do you troubleshoot performance issues in Hive queries? (advanced)
  • Describe the process of joining tables in Hive. (medium)
  • What is dynamic partitioning in Hive and when is it used? (advanced)
  • How can you schedule jobs in Hive? (medium)
  • Discuss the differences between bucketing and partitioning in Hive (see the sketch after this list). (advanced)
  • How do you handle null values in Hive? (basic)
  • Explain the role of the Hive execution engine in query processing. (medium)
  • Can you give an example of a complex Hive query you have written? (advanced)
  • How does Hive support ACID transactions? (medium)
  • Discuss the advantages and disadvantages of using Hive for data processing. (advanced)
  • How do you secure data in Hive? (medium)
  • What are the limitations of Hive? (basic)
  • Explain the concept of bucketing in Hive and when it is used. (medium)
  • Discuss the role of Hive in the Hadoop ecosystem. (basic)
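
For the partitioning and bucketing questions above, here is a small HiveQL sketch (run here via PySpark; all names are hypothetical) showing the two DDL forms side by side.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Partitioning: each distinct dt value becomes its own directory, so queries
# filtering on dt read only the matching partitions.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_part (id BIGINT, amount DOUBLE)
    PARTITIONED BY (dt STRING)
    STORED AS PARQUET
""")

# Bucketing: rows are hashed on user_id into a fixed number of files, which
# can speed up joins and sampling on that column.
spark.sql("""
    CREATE TABLE IF NOT EXISTS events_bkt (user_id BIGINT, event STRING)
    CLUSTERED BY (user_id) INTO 8 BUCKETS
    STORED AS PARQUET
""")
```

Roughly: partitioning controls which data a query reads, while bucketing controls how the data within a table (or partition) is laid out in files.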

Closing Remark

As you explore job opportunities in the field of Hive in India, remember to showcase your expertise and passion for data processing and analysis. Prepare well for interviews by honing your skills and staying updated with the latest trends in the industry. Best of luck in your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies