
2384 Hive Jobs - Page 35

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Marketing Title.

In this role, you will:
- Develop and support ingestion of new feeds: understand the existing framework and carry out development according to the business rules and requirements (a minimal sketch follows this listing).
- Develop and maintain changes and enhancements in Data Ingestion / Juniper, and promote and support them in the production environment within the stipulated timelines.
- Get familiar with the Data Ingestion, Data Refinery, Common Data Model and Compdata frameworks quickly and contribute to application development as soon as possible.
- Bring a methodical and measured approach with keen attention to detail.
- Work under pressure and remain calm in the face of adversity.
- Collaborate, interact and engage with different business, technical and subject matter experts.
- Communicate well and concisely, both in writing and verbally.
- Manage workload from multiple requests and balance priorities.
- Be proactive, with a can-do mindset and attitude, and good documentation skills.

Requirements

To be successful in this role, you should meet the following requirements.

Experience (1 = essential, 2 = very useful, 3 = nice to have):
- Hadoop / Hive / GCP
- Agile / Scrum
- Linux

Technical skills (1 = essential, 2 = useful, 3 = nice to have):
- Any ETL tool
- Analytical troubleshooting
- HiveQL
- On-prem / cloud infrastructure knowledge

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and where opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India
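The ingestion responsibilities above amount to reading a feed, applying business rules, and landing the result in Hive. A minimal PySpark sketch of that shape, assuming a hypothetical pipe-delimited feed, rule, path and table names (none of these come from the posting):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hive support lets the cleaned feed be saved as a managed Hive table.
spark = (SparkSession.builder
         .appName("feed-ingestion-sketch")
         .enableHiveSupport()
         .getOrCreate())

# Hypothetical pipe-delimited feed; real feeds and schemas would come
# from the ingestion framework's metadata.
raw = (spark.read
       .option("sep", "|")
       .option("header", "true")
       .csv("/landing/feeds/accounts/2024-06-01/"))

# Example business rule: drop records missing an account id and
# normalise the currency code.
clean = (raw.filter(F.col("account_id").isNotNull())
            .withColumn("currency", F.upper(F.col("currency")))
            .withColumn("business_date", F.lit("2024-06-01")))

# Append into a date-partitioned Hive table in the refinery layer.
(clean.write
      .mode("append")
      .partitionBy("business_date")
      .saveAsTable("refinery.accounts_feed"))
```

In practice the frameworks named in the posting (Data Ingestion, Data Refinery, Common Data Model) would supply the schemas and rules; the sketch only shows the general read-transform-write pattern.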

Posted 1 week ago

Apply

6.0 - 11.0 years

18 - 25 Lacs

Hyderabad

Work from Office

Source: Naukri

SUMMARY

Data Modeling Professional
Location: Hyderabad/Pune

Experience: The ideal candidate should have at least 6 years of relevant experience in data modeling, with proficiency in SQL, Python, PySpark, Hive, ETL, Unix and Control-M (or similar scheduling tools), along with GCP.

Key Responsibilities:
- Develop and configure data pipelines across various platforms and technologies.
- Write complex SQL queries for data analysis on databases such as SQL Server, Oracle and Hive (see the sketch after this listing).
- Create solutions to support AI/ML models and generative AI.
- Work independently on specialized assignments within project deliverables.
- Provide solutions and tools to enhance engineering efficiencies.
- Design processes, systems and operational models for end-to-end execution of data pipelines.

Preferred Skills: Experience with GCP, particularly Airflow, Dataproc and BigQuery, is advantageous.

Requirements:
- Minimum 6 years of experience in data modeling with SQL, Python, PySpark, Hive, ETL, Unix and Control-M (or similar scheduling tools).
- Proficiency in writing complex SQL queries for data analysis.
- Strong problem-solving and analytical abilities.
- Excellent communication and presentation skills; the ability to communicate efficiently at a global level is paramount.
- Ability to deliver high-quality materials against tight deadlines and to work effectively under pressure with rapidly changing priorities.
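Since the role leans on complex analytical SQL over Hive and relational sources, here is a hedged illustration of the kind of windowed query involved; the table and column names are invented for the example:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Hypothetical analysis: a 3-month moving average of balances plus a
# recency rank per customer, the sort of windowed SQL the posting alludes to.
df = spark.sql("""
    SELECT customer_id,
           month,
           balance,
           AVG(balance) OVER (
               PARTITION BY customer_id
               ORDER BY month
               ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
           ) AS balance_3m_avg,
           ROW_NUMBER() OVER (
               PARTITION BY customer_id
               ORDER BY month DESC
           ) AS recency_rank
    FROM finance.customer_balances
""")

# Keep only each customer's most recent month.
df.filter("recency_rank = 1").show()
```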

Posted 1 week ago

Apply

5.0 - 9.0 years

11 - 12 Lacs

Bengaluru

Work from Office

Source: Naukri

5 to 9 years of experience.
Nice to have: experience in the HP ecosystem (FDL architecture).
The Databricks + SQL combination is a must.

EXPERIENCE: 6-8 years
SKILLS
Primary Skill: Data Engineering
Sub Skill(s): Data Engineering
Additional Skill(s): Databricks, SQL

Posted 1 week ago

Apply

8.0 - 13.0 years

12 - 17 Lacs

Gurugram

Work from Office

Source: Naukri

Job requisition ID: R_288243

Company: MMC Corporate

Description: We are seeking a talented individual to join our MMC Corporate team at AMSI. This role will be based in Gurgaon. This is a hybrid role that requires working at least three days a week in the office.

We will count on you to:
- Support daily production-impacting incidents and implementation activities. This is an engineering role with responsibilities to provide production support, including being on call on a rotation basis.
- Help investigate, triage, diagnose and resolve issues, working with clients (primarily internal, occasionally external), other IT departments and suppliers as appropriate, and provide support until tickets are resolved.
- Assist in troubleshooting activities to resolve issues reported by users, as well as troubleshoot system-triggered alerts and triage issues to the appropriate technology groups.
- Support business-critical applications: ensure monitoring is in place, identify and report on service issues, and formulate proposals for problem resolutions and/or improvements.
- Ensure all relevant implementation, support and change management processes are adhered to (i.e. Software Development Lifecycle and Change Management), and ensure appropriate support documentation is completed and maintained.
- Support project teams in ensuring cyber-security-related projects are implemented successfully and on time with minimal impact to the production environment.
- Execute and coordinate key strategic initiatives as determined with Global Information Security leadership.
- Work with cross-functional and cross-geographic teams to perform POCs testing new products, appliances and services, working with vendors and tech teams across the company.
- Provide consultative support to cross-functional teams and application teams for data-security-related initiatives as needed.
- Collaborate with internal and external groups to assemble the information and data necessary to perform data security engineering activities.
- Work with cross-functional teams to familiarize them with the Global Information Security data security policies and standards, and address any questions.

What you need to have:
- Minimum of 8 years of experience (preferably consulting externally with clients or internally within large organizations).
- Cyber security experience required; minimum 5 years in data security preferred.
- Experience with security appliances such as Vormetric/Thales, OpenText/SecureData, AWS KMS, Azure KV and OKV preferred.
- At minimum, a bachelor's degree in computer science. Industry-standard certifications such as CISSP preferred; other acceptable certifications are CompTIA Security+, GSEC, SSCP or similar.
- Experience with AWS or Azure cloud is good to have; AWS or Azure security certifications are preferred.
- Independent self-starter able to work at a fast pace and under pressure.
- Strong business acumen and a problem-solving, analytical mindset.
- Mastery of Excel and PowerPoint preferred; presentation skills are nice to have.
- Very strong communication and interpersonal skills, and the ability to work with application owners and cross-functional teams across the company.
- Very high standards of quality, accuracy and timeliness. This is an engineering role; very strong analytic and problem-solving skills are required.
- Ability to work in a fast-paced environment and manage competing priorities.
- Motivated individual and strong team player with an innovative approach, enthusiasm to learn and explore, a think-outside-the-box mentality, and willingness to challenge the status quo.

What makes you stand out:
- Industry-standard certifications such as CISSP; other acceptable certifications are CompTIA Security+, GSEC, SSCP or similar.
- Experience with AWS or Azure cloud; AWS or Azure security certifications are preferred.
- Ability to quickly and succinctly engineer and create technical solution documentation.
- Self-starter who works well with limited supervision and with others in a globally diverse IT environment.
- Understanding of cryptography as it relates to application, network and cloud security.

Posted 1 week ago

Apply

5.0 - 7.0 years

5 - 8 Lacs

Pune

Work from Office

Source: Naukri

Job Summary: Cummins is seeking a skilled Data Engineer to support the development, maintenance and optimization of our enterprise data and analytics platform. This role involves hands-on experience in software development, ETL processes and data warehousing, with strong exposure to tools like Snowflake, OBIEE and Power BI. The engineer will collaborate with cross-functional teams, transforming data into actionable insights that enable business agility and scale.

Please note: While the role is categorized as remote, it will follow a hybrid work model based out of our Pune office.

Key Responsibilities:
- Design, develop and maintain ETL pipelines using Snowflake and related data transformation tools.
- Build and automate data integration workflows that extract, transform and load data from various sources, including Oracle EBS and other enterprise systems.
- Analyze, monitor and troubleshoot data quality and integrity issues using standardized tools and methods.
- Develop and maintain dashboards and reports using OBIEE, Power BI and other visualization tools for business stakeholders.
- Work with IT and business teams to gather reporting requirements and translate them into scalable technical solutions.
- Participate in data modeling and storage architecture using star and snowflake schema designs (see the sketch after this listing).
- Contribute to the implementation of data governance, metadata management and access control mechanisms.
- Maintain documentation for solutions and participate in testing and validation activities.
- Support migration and replication of data using tools such as Qlik Replicate, and contribute to cloud-based data architecture.
- Apply agile and DevOps methodologies to continuously improve data delivery and quality assurance processes.

Why Join Cummins?
- Opportunity to work with a global leader in power solutions and digital transformation.
- Be part of a collaborative and inclusive team culture.
- Access to cutting-edge data platforms and tools.
- Exposure to enterprise-scale data challenges and finance domain expertise.
- Drive impact through data innovation and process improvement.

Competencies:
- Data Extraction & Transformation - perform ETL activities from varied sources with high data accuracy.
- Programming - write and test efficient code using industry standards and version control systems.
- Data Quality Management - detect and correct data issues for better decision-making.
- Solution Documentation - clearly document processes, models and code for reuse and collaboration.
- Solution Validation - test and validate changes or solutions based on customer requirements.
- Problem Solving - address technical challenges systematically to ensure effective resolution and prevention.
- Customer Focus - understand business requirements and deliver user-centric data solutions.
- Communication & Collaboration - work effectively across teams to meet shared goals.
- Values Differences - promote inclusion by valuing diverse perspectives and backgrounds.

Education, Licenses, Certifications: Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering or a related technical discipline. Certifications in data engineering or relevant tools (Snowflake, Power BI, etc.) are a plus.

Experience - must-have skills:
- 5-7 years of experience in data engineering or software development, preferably within a finance or enterprise IT environment.
- Proficiency in ETL tools, SQL and data warehouse development.
- Proficiency in Snowflake, Power BI and OBIEE reporting platforms; must have worked on implementations using these tools and technologies.
- Strong understanding of data warehousing principles, including schema design (star/snowflake), ER modeling and relational databases.
- Working knowledge of Oracle databases and Oracle EBS structures.

Preferred Skills:
- Experience with Qlik Replicate, data replication or data migration tools.
- Familiarity with data governance, data quality frameworks and metadata management.
- Exposure to cloud-based architectures, Big Data platforms (e.g., Spark, Hive, Kafka) and distributed storage systems (e.g., HBase, MongoDB).
- Understanding of agile methodologies (Scrum, Kanban) and DevOps practices for continuous delivery and improvement.
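A minimal sketch of the star-schema load step named in the responsibilities, assuming hypothetical staging and warehouse tables: the natural key on the incoming rows is swapped for the dimension's surrogate key, so the fact table carries only keys and measures.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Hypothetical staged extract from an Oracle EBS source plus an
# existing customer dimension.
orders = spark.table("staging.orders")
dim_customer = spark.table("warehouse.dim_customer")

# Star-schema load: resolve the natural key to the dimension's
# surrogate key, keeping only foreign keys and measures on the fact.
fact_orders = (orders.join(
                   dim_customer,
                   orders.customer_code == dim_customer.customer_code,
                   "left")
               .select(dim_customer.customer_sk,
                       orders.order_date,
                       orders.quantity,
                       orders.net_amount))

fact_orders.write.mode("append").saveAsTable("warehouse.fact_orders")
```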

Posted 1 week ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

We are seeking a Spark / Big Data ETL Tech Lead for Commercial Card's Global Data Repository development team. The successful candidate will interact with the Development Project Manager; the development, testing and production support teams; and other departments within Citigroup (such as the System Administrators, Database Administrators, Data Centre Operations and Change Control groups) for TTS platforms. The role requires exceptional communication skills across both technology and the business, and will have a high degree of visibility.

The candidate will be a rigorous technical lead with a strong understanding of how to build scalable, enterprise-level global applications: a dependable and resourceful software professional who can comfortably work in a large development team in a globally distributed, dynamic work environment that fosters diversity, teamwork and collaboration. The ability to work in a high-pressure environment is essential.

Responsibilities:
- Lead the design and implementation of large-scale data processing pipelines using Apache Spark on a Big Data Hadoop platform.
- Develop and optimize Spark applications for performance and scalability.
- Provide technical leadership of multiple large-scale/complex global software solutions.
- Integrate data from various sources, including Couchbase, Snowflake and HBase, ensuring data quality and consistency.
- Develop teams of permanent employees and vendors, 5 to 15 developers in size.
- Build and sustain strong relationships with the senior business leaders associated with the platform.
- Design, code, test, document and implement application release projects as part of the development team.
- Work with onsite development partners to ensure design and coding best practices.
- Work closely with Program Management and Quality Control teams to deliver quality software to agreed project schedules.
- Proactively notify the Development Project Manager of risks, bottlenecks, problems, issues and concerns.
- Comply with Citi's System Development Lifecycle and Information Security requirements.
- Oversee development scope, budgets and timeline documents.
- Monitor, update and communicate project timelines and milestones; obtain senior management feedback; understand potential speed bumps and the client's true concerns/needs.
- Stay updated with the latest trends and technologies in big data and cloud computing.
- Mentor and guide junior developers, providing technical leadership and expertise.

Key Challenges:
- Managing time and changing priorities in a dynamic environment.
- Providing quick turnaround on software issues and management requests.
- Assimilating key issues and concepts and coming up to speed quickly.

Qualifications:
- Bachelor's or master's degree in Computer Science, Information Technology or equivalent.
- Minimum 10 years of proven experience in developing and managing big data solutions using Apache Spark, with a strong hold on Spark Core, Spark SQL and Spark Streaming (see the sketch after this listing).
- Minimum 6 years of experience in successfully leading globally distributed teams.
- Strong programming skills in Scala, Java or Python.
- Hands-on experience with technologies like Apache Hive, Apache Kafka, HBase, Couchbase, Sqoop and Flume.
- Proficiency in SQL and experience with relational (Oracle/PL-SQL) and NoSQL databases like MongoDB.
- Demonstrated people and technical management skills, and excellent software development skills.
- Strong experience implementing complex file transformations (e.g., positional, XML).
- Experience building enterprise systems with a focus on recovery, stability, reliability, scalability and performance.
- Experience working on Kafka and JMS/MQ applications, and across multiple operating systems (Unix, Linux, Windows).
- Familiarity with data warehousing concepts and ETL processes.
- Experience in performance tuning of large technical solutions with significant volumes.
- Knowledge of data modeling, data architecture and data integration techniques, and of best practices for data security, privacy and compliance.

Key Competencies:
- Excellent organization skills, attention to detail, and ability to multi-task.
- Demonstrated sense of responsibility and capability to deliver quickly.
- Excellent communication skills; clearly articulating and documenting technical and functional specifications is a key requirement.
- Proactive problem-solver, relationship builder and team player.
- Negotiation, difficult-conversation management and prioritization skills.
- Flexibility to handle multiple complex projects and changing priorities.
- Excellent verbal, written and interpersonal communication skills.
- Good analytical and business skills.
- Promotes teamwork and builds strong relationships within and across global teams.
- Promotes continuous process improvement, especially in code quality, testability and reliability.

Desirable Skills:
- Experience in Java, Spring, and ETL tools like Talend or Ab Initio is a plus.
- Experience migrating functionality from ETL tools to Spark.
- Experience/knowledge of cloud technologies (AWS, GCP).
- Experience in the financial industry.
- ETL certification or project management certification.
- Experience with Commercial Cards applications and processes would be advantageous.
- Experience with Agile methodology.

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
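Given the required strong hold on Spark Core, Spark SQL and Spark Streaming alongside Kafka, a minimal Structured Streaming sketch; the topic, brokers and event schema are assumptions, and the spark-sql-kafka connector package must be on the classpath:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("card-txn-stream-sketch").getOrCreate()

# Hypothetical schema for JSON transaction events on the topic.
schema = (StructType()
          .add("card_id", StringType())
          .add("merchant", StringType())
          .add("amount", DoubleType()))

# Read the stream from a hypothetical Kafka topic and parse the payload.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "card-transactions")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("t"))
          .select("t.*"))

# Running spend per merchant, written to the console for illustration;
# a real pipeline would write to a sink such as HBase or Snowflake.
query = (events.groupBy("merchant")
         .agg(F.sum("amount").alias("total_spend"))
         .writeStream
         .outputMode("complete")
         .format("console")
         .start())

query.awaitTermination()
```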

Posted 1 week ago

Apply

6.0 - 9.0 years

4 - 8 Lacs

Pune

Work from Office

Source: Naukri

Your Role

As a senior software engineer with Capgemini, you will have 6+ years of experience in Azure technology with a strong project track record. In this role you will play a key role in:
- Strong customer orientation, decision making, problem solving, communication and presentation skills
- Very good judgement skills and ability to shape compelling solutions and solve unstructured problems with assumptions
- Very good collaboration skills and ability to interact with multi-cultural and multi-functional teams spread across geographies
- Strong executive presence and entrepreneurial spirit
- Superb leadership and team-building skills, with the ability to build consensus and achieve goals through collaboration rather than direct line authority

Your Profile
- Experience with Azure Databricks and Data Factory
- Experience with Azure Data components such as Azure SQL Database, Azure SQL Warehouse and Synapse Analytics
- Experience in Python/PySpark/Scala/Hive programming
- Experience with Azure Databricks/ADB is a must-have
- Experience with building CI/CD pipelines in data environments

Posted 1 week ago

Apply

4.0 - 9.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Source: Naukri

Your Role

As a senior software engineer with Capgemini, you should have 4+ years of experience as an Azure Data Engineer with a strong project track record. In this role you will play a key role in:
- Strong customer orientation, decision making, problem solving, communication and presentation skills
- Very good judgement skills and ability to shape compelling solutions and solve unstructured problems with assumptions
- Very good collaboration skills and ability to interact with multi-cultural and multi-functional teams spread across geographies
- Strong executive presence and entrepreneurial spirit
- Superb leadership and team-building skills, with the ability to build consensus and achieve goals through collaboration rather than direct line authority

Your Profile
- Experience with Azure Databricks and Data Factory
- Experience with Azure Data components such as Azure SQL Database, Azure SQL Warehouse and Synapse Analytics
- Experience in Python/PySpark/Scala/Hive programming
- Experience with Azure Databricks/ADB
- Experience with building CI/CD pipelines in data environments

Primary Skills: ADF (Azure Data Factory) or ADB (Azure Databricks)
Secondary Skills: Excellent verbal and written communication and interpersonal skills

Skills (competencies): Ab Initio, Agile (Software Development Framework), Apache Hadoop, AWS Airflow, AWS Athena, AWS CodePipeline, AWS EFS, AWS EMR, AWS Redshift, AWS S3, Azure ADLS Gen2, Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse, Bitbucket, Change Management, Client Centricity, Collaboration, Continuous Integration and Continuous Delivery (CI/CD), Data Architecture Patterns, Data Format Analysis, Data Governance, Data Modeling, Data Validation, Data Vault Modeling, Database Schema Design, Decision-Making, DevOps, Dimensional Modeling, GCP Bigtable, GCP BigQuery, GCP Cloud Storage, GCP Dataflow, GCP Dataproc, Git, Greenplum, HQL, IBM DataStage, IBM DB2, Industry Standard Data Modeling (FSLDM), Industry Standard Data Modeling (IBM FSDM), Influencing, Informatica IICS, Inmon methodology, JavaScript, Jenkins, Kimball, Linux - Red Hat, Negotiation, Netezza, NewSQL, Oracle Exadata, Performance Tuning, Perl, Platform Update Management, Project Management, PySpark, Python, R, RDD Optimization, CentOS, SAS, Scala, Shell Script, Snowflake, Spark, Spark Code Optimization, SQL, Stakeholder Management, Sun Solaris, Synapse, Talend, Teradata, Time Management, Ubuntu, Vendor Management

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Role: Data Engineer (Scala)

Must-have experience: 5+ years overall, 3+ years relevant.
Must-have skills: Spark, SQL, Scala/PySpark.
Good to have: AWS, EMR, S3, Hadoop, Control-M.

Key responsibilities (please specify if the position is an individual one or part of a team):
1) Design strategies and programs to collect, store, analyse and visualize data from various sources.
2) Develop big data solution recommendations and ensure implementation of the chosen big data solution.
3) Program, preferably in different programming/scripting languages such as Scala, Python, Java, Pig or SQL.
4) Proficient knowledge of big data frameworks such as Spark and MapReduce.
5) Understanding of Hadoop, Hive, HBase, MongoDB and/or MapReduce.
6) Experience with one of the large cloud-computing infrastructure solutions, such as Amazon Web Services or Elastic MapReduce.
7) Tune the Spark engine for high volumes of data (approx. a billion records) processed using BDM (see the sketch after this listing).
8) Troubleshoot data issues and deep-dive into root cause analysis of any performance issue.
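Responsibility 7 concerns tuning Spark for roughly a billion records. A hedged sketch of common starting points; the values and paths are illustrative, not prescriptions: size shuffle partitions, enable adaptive query execution and skew-join handling, and broadcast the small side of a join so the large side never shuffles.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

# Illustrative tuning knobs for a large batch job; real values depend
# on cluster size, input volume, and the shape of the joins.
spark = (SparkSession.builder
         .appName("billion-row-tuning-sketch")
         .config("spark.sql.shuffle.partitions", "2000")
         .config("spark.sql.adaptive.enabled", "true")
         .config("spark.sql.adaptive.skewJoin.enabled", "true")
         .config("spark.serializer",
                 "org.apache.spark.serializer.KryoSerializer")
         .getOrCreate())

big = spark.read.parquet("/data/events")       # hypothetical ~1B-row input
small = spark.read.parquet("/data/dim_users")  # small dimension table

# Broadcasting the dimension avoids shuffling the billion-row side.
joined = big.join(broadcast(small), "user_id")
joined.write.mode("overwrite").parquet("/data/events_enriched")
```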

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

The Applications Development Intermediate Programmer Analyst is an intermediate-level position responsible for participation in the establishment and implementation of new or revised application systems and programs, in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
- Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements, including using script tools and analyzing/interpreting code.
- Consult with users, clients and other technology groups on issues; recommend programming solutions; and install and support customer exposure systems.
- Apply fundamental knowledge of programming languages for design specifications.
- Analyze applications to identify vulnerabilities and security issues, and conduct testing and debugging.
- Serve as advisor or coach to new or lower-level analysts.
- Identify problems, analyze information, and make evaluative judgements to recommend and implement solutions.
- Resolve issues by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
- Operate with a limited level of direct supervision, exercising independence of judgement and autonomy, and act as subject matter expert to senior stakeholders and/or other team members.
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Qualifications:
- 5-8 years of relevant experience in the financial services industry.
- Intermediate-level experience in an applications development role.
- Consistently demonstrates clear and concise written and verbal communication.
- Demonstrated problem-solving and decision-making skills.
- Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements.

Education: Bachelor's degree/University degree or equivalent experience.

Key Responsibilities:
- Develop and design highly efficient ETL solutions based on the business requirements and aggressive delivery timelines.
- Understand business and functional requirements provided by Business Analysts, convert them into technical design documents that deliver on the requirements, and ensure best practices are followed.
- Prepare detailed test plans and ensure proper testing for each module developed.
- Prepare handover documents; manage SIT with oversight of UAT and production implementation.
- Identify and proactively resolve issues that could affect system performance, reliability and usability.
- Demonstrate an in-depth understanding of how the development function integrates within the overall business/technology to achieve objectives; this requires a good understanding of the industry.
- Ensure process compliance and manage expectations of the leadership.
- Work proactively and independently to address development requirements, and articulate issues/challenges with enough lead time to address risks.
- Design, implement, integrate and test new features.
- Explore existing application systems; determine areas of complexity and potential risks to successful implementation.
- Build relationships with business and technology stakeholders.
- Successfully migrate ETL logic onto the Spark/Hadoop platform.

Person Specification

Knowledge/Experience:
- 5 to 8 years of experience in software development.
- Expertise in building ETL applications using Ab Initio.
- Good knowledge of RDBMS (Oracle), with the ability to write the complex SQL needed to investigate and analyse data issues.
- 3 years of experience in the big data ecosystem (Hadoop, Spark, Hive) is a strong plus.
- Good UNIX shell scripting.
- Hands-on experience with Autosys/Control Centre scheduling tools.
- Proven 5+ years of experience working with complex data warehouses.
- Strong influencing and interpersonal skills.
- Willing to work flexible hours.

Skills:
- Strong design and execution mindset.
- Strong work ethic, good interpersonal and communication skills, and a high energy level.
- Analytical thinker and quick learner, capable of organizing and structuring information effectively.
- Ability to prioritize and manage schedules under tight, fixed deadlines.
- Excellent written and verbal communication skills, and the ability to build relationships at all levels.
- Ability to independently work with vendors in resolving issues and developing solutions.

Qualifications: Bachelor of Science or Master's degree in Computer Science, Engineering or a related discipline.

Competencies:
- Strong work organization and prioritization capabilities.
- Takes ownership and accountability for assigned work.
- Ability to manage multiple activities; focused and determined in getting the job done right.
- Ability to identify and manage key risks and issues.
- Shows drive, integrity, sound judgment, adaptability, creativity, self-awareness and an ability to multitask and prioritize.
- Good change management discipline.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time

Citi is an equal opportunity and affirmative action employer. Qualified applicants will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Citigroup Inc. and its subsidiaries ("Citi") invite all qualified interested applicants to apply for career opportunities. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View the "EEO is the Law" poster, the EEO is the Law Supplement, the EEO Policy Statement, and the Pay Transparency Posting.

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Introduction

A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.

Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in ground-breaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.

Your Role and Responsibilities

As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best-practice technologies, tools, techniques and products to translate system requirements into the design and development of customized systems.

Preferred Education: Master's Degree

Required Technical and Professional Expertise:
- Spring Boot, Java/J2EE, Microservices
- Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.)
- Spark
- Good to have: Python

Preferred Technical and Professional Experience: None

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Role: Data Engineer (Scala)

Must-have experience: 5+ years overall, 3+ years relevant.
Must-have skills: Spark, SQL, Scala/PySpark.
Good to have: AWS, EMR, S3, Hadoop, Control-M.

Key responsibilities (please specify if the position is an individual one or part of a team):
1) Design strategies and programs to collect, store, analyse and visualize data from various sources.
2) Develop big data solution recommendations and ensure implementation of the chosen big data solution.
3) Program, preferably in different programming/scripting languages such as Scala, Python, Java, Pig or SQL.
4) Proficient knowledge of big data frameworks such as Spark and MapReduce.
5) Understanding of Hadoop, Hive, HBase, MongoDB and/or MapReduce.
6) Experience with one of the large cloud-computing infrastructure solutions, such as Amazon Web Services or Elastic MapReduce.
7) Tune the Spark engine for high volumes of data (approx. a billion records) processed using BDM.
8) Troubleshoot data issues and deep-dive into root cause analysis of any performance issue.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

We are looking for energetic, high-performing and highly skilled Java + Big Data Engineers to help shape our technology and product roadmap. You will be part of the fast-paced, entrepreneurial Enterprise Personalization portfolio, focused on delivering the next generation of global marketing capabilities. This team is responsible for building products that power Merchant Offers personalization for Amex card members.

Job Description:
- Demonstrated leadership in designing sustainable software products, setting development standards, automated code review processes, continuous builds and rigorous testing.
- Ability to effectively lead and communicate across third parties and technical and business product managers on solution design.
- Primary focus is spent writing code and API specs, conducting code reviews and testing in ongoing sprints, or doing proofs of concept/automation tools.
- Applies visualization and other techniques to fast-track concepts.
- Functions as a core member of an Agile team, driving user story analysis and elaboration, design and development of software applications, testing, and build automation tools.
- Works on a specific platform/product or as part of a dynamic resource pool assigned to projects based on demand and business priority.
- Identifies opportunities to adopt innovative technologies.

Qualification:
- Bachelor's degree in computer science, computer engineering or another technical discipline, or equivalent work experience.
- 5+ years of software development experience.
- 3-5 years of experience leading teams of engineers.
- Demonstrated experience with Agile or other rapid application development methods.
- Demonstrated experience with object-oriented design and coding.
- Demonstrated experience with these core technical skills (mandatory): Core Java, Spring Framework, Java EE; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark; relational databases (PostgreSQL / MySQL / DB2, etc.); data serialization techniques (Avro); cloud development (microservices); parallel and distributed (multi-tiered) systems; application design, software development and automated testing.
- Demonstrated experience with these additional technical skills (nice to have): Unix/shell scripting; Python/Scala; message queuing and stream processing (Kafka); Elasticsearch; AJAX tools/frameworks; web services, open API development and REST concepts.
- Experience implementing integrated automated release management using tools/technologies/frameworks like Maven, Git, code/security review tools, Jenkins, automated testing and JUnit.

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Role: Data Engineer (Scala)

Must-have experience: 5+ years overall, 3+ years relevant.
Must-have skills: Spark, SQL, Scala/PySpark.
Good to have: AWS, EMR, S3, Hadoop, Control-M.

Key responsibilities (please specify if the position is an individual one or part of a team):
1) Design strategies and programs to collect, store, analyse and visualize data from various sources.
2) Develop big data solution recommendations and ensure implementation of the chosen big data solution.
3) Program, preferably in different programming/scripting languages such as Scala, Python, Java, Pig or SQL.
4) Proficient knowledge of big data frameworks such as Spark and MapReduce.
5) Understanding of Hadoop, Hive, HBase, MongoDB and/or MapReduce.
6) Experience with one of the large cloud-computing infrastructure solutions, such as Amazon Web Services or Elastic MapReduce.
7) Tune the Spark engine for high volumes of data (approx. a billion records) processed using BDM.
8) Troubleshoot data issues and deep-dive into root cause analysis of any performance issue.

Posted 1 week ago

Apply

6.0 - 11.0 years

10 - 20 Lacs

Visakhapatnam, Hyderabad, Bengaluru

Work from Office

Source: Naukri

Should have working experience in Spark/Scala, AWS, and Big Data environments (Hadoop, Hive, Sqoop). Python scripting or Java programming is nice to have. Must be willing to relocate to Hyderabad.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Introduction

In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities

As a Big Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for source-to-target mappings and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
- Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Preferred Education: Master's Degree

Required Technical and Professional Expertise:
- Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive.
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
- Developed Python and PySpark programs for data analysis.
- Good working experience with Python to develop a custom framework for generating rules (much like a rules engine).
- Developed Python code to gather data from HBase and designed solutions implemented using PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations, and used Hive context objects to perform read/write operations (a minimal sketch follows this listing).

Preferred Technical and Professional Experience:
- Understanding of DevOps.
- Experience building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages such as Python, Java and Scala.
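The expertise list describes applying business transformations with Spark DataFrames and using a Hive context for read/write. A minimal sketch of that pattern with placeholder table names; in current PySpark, a SparkSession built with enableHiveSupport() subsumes the legacy HiveContext the posting refers to:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# SparkSession with Hive support replaces the old HiveContext; the
# table names here are placeholders, not from the posting.
spark = (SparkSession.builder
         .appName("hive-read-write-sketch")
         .enableHiveSupport()
         .getOrCreate())

# Read an existing Hive table into a DataFrame.
txns = spark.table("raw.transactions")

# A business transformation of the kind the posting describes:
# daily totals per account.
daily = (txns.groupBy("account_id", F.to_date("ts").alias("day"))
         .agg(F.sum("amount").alias("daily_total")))

# Write the result back to Hive.
daily.write.mode("overwrite").saveAsTable("curated.daily_totals")
```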

Posted 1 week ago

Apply

8.0 - 10.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Job Description

The ideal candidate must possess in-depth functional knowledge of the process area and apply it to operational scenarios to provide effective solutions. The role requires identifying discrepancies and proposing optimal solutions using a logical, systematic and sequential methodology. It is vital to be open-minded towards inputs and views from team members and to effectively lead, control and motivate groups towards company objectives. Additionally, the candidate must be self-directed and proactive, and seize every opportunity to meet internal and external customer needs and achieve customer satisfaction by effectively auditing processes, implementing best practices and process improvements, and utilizing the frameworks and tools available. Goals and thoughts must be clearly and concisely articulated and conveyed, verbally and in writing, to clients, colleagues, subordinates and supervisors.

Senior Process Manager - Roles and Responsibilities:
- Understand the business model and why things are the way they are; ask relevant questions and get them clarified.
- Break down complex problems into small solvable components to identify problem areas in each component.
- Conduct cost/benefit analyses and feasibility studies for proposed projects to aid decision-making.
- Facilitate the implementation of new or improved business processes and systems.
- Coordinate with business stakeholders to identify gaps in data and processes, and suggest process improvements.
- Understand and follow the project roadmap, plan data availability, and coordinate with the execution team to ensure successful execution of projects.
- Prescribe suitable solutions with an understanding of the limitations of toolsets and available data.
- Manage procurement of data from various sources and perform data audits.
- Fetch and analyze data from disparate sources and drive meaningful insights.
- Provide recommendations on business rules for effective campaign targeting.
- Interpret analytical results and provide insights; present key findings and recommended next steps to clients.
- Develop tangible analytical projects; communicate project details to clients and the internal delivery team via written documents and presentations, in the form of specifications, diagrams and data/process models.
- Audit deliverables, ensuring accuracy by critically examining the data and reports against requirements.
- Collaborate on regional/global analytic initiatives and localize inputs for country campaign practices.
- Actively work on audience targeting insights, optimize campaigns and improve communications governance.

Technical and Functional Skills

Must have:
- BS/BA degree or equivalent professional experience.
- Minimum 8-10 years of professional experience in advanced analytics for a Fortune 500-scale company or a prominent consulting organization.
- Experience in data extraction tools, advanced Excel, CRM analytics, campaign marketing and campaign analytics.
- Strong numerical and analytical skills; strong in advanced Excel (prior experience with Google Sheets is a plus).
- Strong analytical and storytelling skills; ability to derive relevant insights from large reports and piles of disparate data.
- Comfortable working autonomously with broad guidelines.
- Passion for data and analytics for marketing, and eagerness to learn.
- Excellent communication skills, both written and spoken; ability to explain complex technical concepts in plain English.
- Ability to manage multiple priorities and projects, aligning teams to project timelines and ensuring quality of deliverables.
- Work with business teams to identify business use cases and develop solutions to meet these needs using analytical approaches.
- Manage regular reporting and ad hoc data extracts for other departments.
- Knowledge of analyzing digital campaigns and the tools/technologies of performance marketing.

Good to have:
- Hands-on experience in digital marketing and/or 1:1 marketing in any channel; expert-level knowledge of database marketing and CRM.
- Working knowledge of data visualization tools (Tableau, QlikView, etc.).
- Working knowledge of analytical/statistical techniques.
- Experience in a Hadoop environment (Hive, Presto) is a plus.
- Experience in Python/R.
- Previous consulting experience is a definite plus.

About Us

At eClerx, we serve some of the largest global companies, including 50 of the Fortune 500, as clients. Our clients call upon us to solve their most complex problems and deliver transformative insights. Across roles and levels, you get the opportunity to build expertise, challenge the status quo, think bolder, and help our clients seize value.

About the Team

eClerx is a global leader in productized services, bringing together people, technology and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry. Our vision is to be the innovation partner of choice for technology, data analytics and process management services. Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience.

eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Source: LinkedIn

Our Purpose

Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Job Title: Senior Data Scientist - Data & Analytics

We work to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. We cultivate a culture of inclusion for all employees that respects their individual strengths, views and experiences. We believe that our differences enable us to be a better team, one that makes better decisions, drives innovation and delivers better business results.

Who is Mastercard?

Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart and accessible. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all.

Our Team

As consumer preference for digital payments continues to grow, ensuring a seamless and secure consumer experience is top of mind. The Optimization Solutions team focuses on tracking digital performance across all products and regions, and on understanding the factors influencing performance and the broader industry landscape. This includes delivering data-driven insights and business recommendations, engaging directly with key external stakeholders on implementing optimization solutions (new and existing), and partnering across the organization to drive alignment and ensure action is taken.

Are you excited about data assets and the value they bring to an organization? Are you an evangelist for data-driven decision-making? Are you motivated to be part of a team that builds large-scale analytical capabilities supporting end users across six continents? Do you want to be the go-to resource for data science and analytics in the company?

The Role

Work closely in collaboration with multiple internal business groups across Mastercard to implement end-to-end customer lifecycle solutions utilizing Reltio. The candidate for this position will focus on building end-to-end solutions using Reltio/Databricks to blend data from multiple data sources (Salesforce, big data, SQL Server) and creating actionable insights that transform siloed data from disparate sources into unified, trusted and interoperable data (a hedged sketch of the blending step follows this listing). The candidate should have a strong understanding of data management concepts and experience working with the Reltio platform, a solid understanding of cloud-based systems, and the ability to work effectively in a team environment. Strong communication and collaboration skills are also important, as is creating design documents, including data models, data flow diagrams and system architecture diagrams.

All About You
- A superior academic record at a leading university in Computer Science, Data Science, Technology, mathematics, statistics or a related field, or equivalent work experience.
- 5+ years of experience using Python/Spark, Hadoop platforms and tools (Hive, Impala), and SQL to build big data products and platforms.
- 1+ years of experience working on end-to-end solutions using Reltio; must have completed at least one end-to-end Reltio project.
- Ability to tune Reltio to meet performance requirements.
- Familiar with data cleansing, data quality, data governance and data migration concepts.
- Familiarity with data integration technologies and techniques, including RIH processes, data pipelines and data transformation.
- Demonstrated success interacting with stakeholders to understand technical needs and ensuring analyses and solutions meet their needs effectively.
- Able to work in a fast-paced, deadline-driven environment, both as part of a team and as an individual contributor.
- Ability to move easily between business, analytical and technical teams, and to articulate solution requirements for each group.
- Experience with an enterprise business intelligence/data platform (e.g., Tableau, Power BI) is a plus.
- Experience with cloud-based (SaaS) solutions, ETL processes or API integrations is a plus.
- Experience with the Azure/AWS cloud data platforms is a plus.

Education: Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, Mathematics or Statistics; M.S./M.B.A. preferred.

Additional Competencies:
- Excellent English, quantitative, technical and communication (oral/written) skills.
- Analytical/problem-solving ability.
- Strong attention to detail and quality.
- Creativity/innovation.
- Self-motivated; operates with a sense of urgency.
- Project management/risk mitigation.
- Able to prioritize and perform multiple tasks simultaneously.

Corporate Security Responsibility

All activities involving access to Mastercard assets, information and networks come with an inherent risk to the organization. It is therefore expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.

R-242417
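The blending step described in the role, sketched without inventing anything about Reltio's API: a hedged PySpark example that joins a SQL Server extract with a landed Salesforce extract on a shared key before the unified records are handed to the MDM load. Host, credentials, paths and columns are all placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("crm-blend-sketch").getOrCreate()

# SQL Server accounts via JDBC (hypothetical host and credentials).
accounts = (spark.read.format("jdbc")
            .option("url", "jdbc:sqlserver://crm-db:1433;databaseName=crm")
            .option("dbtable", "dbo.accounts")
            .option("user", "svc_reader")
            .option("password", "***")  # placeholder; use a secret store
            .load())

# Salesforce extract previously landed as Parquet (a common pattern;
# reading Salesforce directly would need a dedicated connector).
sf_contacts = spark.read.parquet("/landing/salesforce/contacts/")

# Blend on a shared business key and normalise fields before handing
# the unified records to the MDM (Reltio) load process.
unified = (accounts.join(sf_contacts, "account_number", "full_outer")
           .withColumn("email", F.lower(F.col("email")))
           .dropDuplicates(["account_number", "email"]))

unified.write.mode("overwrite").parquet("/staging/unified_customers/")
```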

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

The Applications Development Senior Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Key Responsibilities:
- Design, develop, and optimize large-scale data pipelines and workflows using Big Data technologies such as Hadoop, Hive, Impala, Spark, and PySpark.
- Build and maintain data integration solutions to process structured and unstructured data from various sources.
- Implement and manage CI/CD pipelines to automate deployment and testing of data engineering solutions.
- Work with relational databases like Oracle to design and optimize data storage and retrieval.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Ensure data quality, security, and governance across all data engineering processes.
- Monitor and troubleshoot performance issues in data pipelines and systems.
- Stay updated with the latest trends and advancements in Big Data and data engineering technologies.

Required Skills and Qualifications:
- Proven experience in Big Data technologies: Hadoop, Hive, Impala, Spark, and PySpark.
- Strong programming skills in Python, Java, or Scala.
- Hands-on experience with CI/CD tools like Jenkins, Git, or similar.
- Proficiency in working with relational databases, especially Oracle.
- Solid understanding of data modeling, ETL processes, and data warehousing concepts.
- Experience with cloud platforms (e.g., AWS, Azure, or GCP) is a plus.
- Strong problem-solving skills and ability to work in a fast-paced environment.
- Excellent communication and collaboration skills.

Education: Bachelor's degree/University degree or equivalent experience.

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 1 week ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

TomTom is currently seeking a Data Engineer to join our global Reporting, Analytics and Data Engineering team. In this role, you will contribute to delivering the most current, accurate, and detailed maps and location services for millions of drivers and users globally, supporting the advancement of autonomous driving. The Reporting, Analytics and Data Engineering team operates on a global scale, focusing on developing cutting-edge data products and services that provide comprehensive insights into map production efficiency, quality, and coverage. The impact you'll make The Data Engineer gained a strong software engineering skillset and has a good understanding of the product and the systems her team owns. She consistently delivers small to medium product improvements with little to no guidance and contributes to improving the operational aspects of her team’s systems. The data engineer estimates effort and identifies risks with reasonable accuracy, furthermore she actively participates in priority decisions and in solution and system designs beyond the scope of her own deliverables. The data engineer takes ownership for her growth and seeks opportunities for working outside of her comfort zone. She mentors more junior engineers selflessly and knows that success happens through the effectiveness of the whole team, not individual heroics. She contributes constructively to the community of Software Engineers primarily inside TomTom and occasionally outside. Summarized you have/can T-shaped skills, where SW engineering fundamentals are coupled with deep knowledge of specific technologies Show accountable behavior for successful delivery of own work Show accountable behavior for fit-for-purpose implementation of processes, policies and governance in the team Show leadership by taking ownership of and improving medium team-internal processes and ways of working or by leading medium team-internal changes What You'll Need 2+ years of experience in Software Development, most of it should be in Data Engineering. Proficiency in Python and Scala Strong knowledge in Modern Big Data Architecture and Technologies and Experience supporting technologies like Spark, Hive, Hbase, Databricks, Kafka, Unity Catalog Strong working experience in DevOps environment, and passion for CI/CD tools, Azure cloud computing and Azure Data Factory Knowledge of SQL databases and NoSQL databases. Strong understanding of industry technology (analytics, monitoring, code deployment, system scalability, load balancers, web servers) Makes meaningful contributions to data engineering projects. Strong English written and verbal communication skills, and the ability to communicate effectively in a global work culture. The ability to drive issues to resolution through communication, collaboration. Strive for providing solutions of high quality (e.g., logical, testable, maintainable, efficient, documented). Being responsible and accountable of individual work or work involved in Pair Programing. Identify risks and raise it in common forum. Continuously learn and share the knowledge. Being honest and transparent is the key. Nice to have Continuously learn and share the knowledge. Being honest and transparent is the key. What We Offer A competitive compensation package, of course. Time and resources to grow and develop, including a personal development budget and paid leave for learning days, as well as paid access to e-learning resources such as O’Reilly and LinkedIn Learning. 
Time to support life outside of work, with enhanced parental leave plus paid leave to care for loved ones and volunteer in local communities.
Work flexibility, where TomTom’ers, in agreement with their manager and team, use both the office and home to focus, collaborate, learn and socialize. It’s all about getting the best out of both worlds – we ask TomTom’ers to come to the office two days a week, and the remaining three are free to be worked in either location.
Improve your home office with a setup budget and get extra support with a monthly allowance.
Enjoy options to work from your home country and abroad for a set number of days each year, to visit family and friends, or to simply explore the world we’re mapping.
Take the holidays you want with a competitive holiday plan, plus an extra day off to celebrate your birthday.
Join annual events like our Hackathon and DevDays to bring your ideas to life with talented teammates from around the world.
Become a part of our inclusive global culture and have the chance to collaborate with a diverse community – we have over 80 nationalities at TomTom!
Find out more about our global benefits and enjoy additional local benefits tailored to your location.

Meet your team

We’re Maps, a global team within TomTom’s Location Technology Products technical unit. Our team is driven to deliver the most up-to-date, accurate and detailed maps for hundreds of millions of users around the world. Joining our team, you’ll continuously innovate our mapmaking processes, directly contributing to our vision: engineering the world's most trusted and useful map.

At TomTom...

You’ll help people find their way in the world. In 2004, TomTom revolutionized how the world moves with the introduction of the first portable navigation device. Now, we intend to do it again by engineering the first-ever real-time map, the smartest and most useful map on the planet. Work with a team of 3,700 unique, curious and passionate problem-solvers. Together, we’ll open up a world of possibilities for car manufacturers, enterprises and developers to help people understand and get closer to the world around them.

After you apply

Our recruitment team will work hard to give you a meaningful experience throughout your journey with us, no matter the outcome. Your application will be screened closely and you can rest assured that all follow-up actions will be thorough, from assessments and interviews all the way through onboarding. To find out more about our application process, check out our hiring FAQs.

TomTom is an equal opportunity employer

TomTom is where you can find your place in the world. Every day we welcome, nurture and celebrate differences. Why? Because your uniqueness is what makes you, you. No matter your culture or background, you’ll find your impact at TomTom. Research also shows that sometimes women and underrepresented communities can be hesitant to apply for positions unless they believe they meet 100% of the criteria. If you can relate to this, please know that we’d love to hear from you.

Posted 1 week ago

Apply

12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Years of Experience: Candidates with 12+ years of hands-on experience

Position: Senior Manager

Required Skills

Successful candidates will have demonstrated the following skills and characteristics:

Must Have

Deep expertise in AI/ML solution design, including supervised and unsupervised learning, deep learning, NLP, and optimization.
Strong hands-on experience with ML/DL frameworks like TensorFlow, PyTorch, scikit-learn, H2O, and XGBoost.
Solid programming skills in Python, PySpark, and SQL, with a strong foundation in software engineering principles.
Proven track record of building end-to-end AI pipelines, including data ingestion, model training, testing, and production deployment.
Experience with MLOps tools such as MLflow, Airflow, DVC, and Kubeflow for model tracking, versioning, and monitoring.
Understanding of big data technologies like Apache Spark, Hive, and Delta Lake for scalable model development.
Expertise in AI solution deployment across cloud platforms like GCP, AWS, and Azure using services like Vertex AI, SageMaker, and Azure ML.
Experience in REST API development, NoSQL database design, and RDBMS design and optimization.
Familiarity with API-based AI integration and containerization technologies like Docker and Kubernetes.
Proficiency in data storytelling and visualization tools such as Tableau, Power BI, Looker, and Streamlit.
Programming skills in Python and either Scala or R, with experience using Flask and FastAPI.
Experience with software engineering practices, including use of GitHub, CI/CD, code testing, and analysis.
Skilled in using Apache Spark, including PySpark and Databricks, for big data processing.
Strong understanding of foundational data science concepts, including statistics, linear algebra, and machine learning principles.
Knowledgeable in integrating DevOps, MLOps, and DataOps practices to enhance operational efficiency and model deployment.
Experience with cloud infrastructure services like Azure and GCP.
Familiarity with observability and monitoring tools like Prometheus and the ELK stack, adhering to SRE principles and techniques.
Cloud or Data Engineering certifications or specialization certifications (e.g. Google Professional Machine Learning Engineer, Microsoft Certified: Azure AI Engineer Associate – Exam AI-102, AWS Certified Machine Learning – Specialty (MLS-C01), Databricks Certified Machine Learning)

Nice To Have

Experience implementing generative AI, LLMs, or advanced NLP use cases
Exposure to real-time AI systems, edge deployment, or federated learning
Strong executive presence and experience communicating with senior leadership or CXO-level clients

Roles And Responsibilities

Lead and oversee complex AI/ML programs, ensuring alignment with business strategy and delivering measurable outcomes.
Serve as a strategic advisor to clients on AI adoption, architecture decisions, and responsible AI practices.
Design and review scalable AI architectures, ensuring performance, security, and compliance.
Supervise the development of machine learning pipelines, enabling model training, retraining, monitoring, and automation.
Present technical solutions and business value to executive stakeholders through impactful storytelling and data visualization.
Build, mentor, and lead high-performing teams of data scientists, ML engineers, and analysts.
Drive innovation and capability development in areas such as generative AI, optimization, and real-time analytics.
Contribute to business development efforts, including proposal creation, thought leadership, and client engagements.
Partner effectively with cross-functional teams to develop, operationalize, integrate, and scale new algorithmic products.
Develop code, CI/CD, and MLOps pipelines, including automated tests, and deploy models to cloud compute endpoints.
Manage cloud resources and build accelerators to enable other engineers, with experience working across two hyperscale clouds.
Demonstrate effective communication skills, coaching and leading junior engineers, with a successful track record of building production-grade AI products for large organizations.

Professional And Educational Background

BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA from a reputed institute

Posted 1 week ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Teamwork makes the stream work.

Roku is changing how the world watches TV

Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers.

From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the team

Roku runs one of the largest data lakes in the world. We store over 70 PB of data, run more than 10 million queries per month, and scan over 100 PB of data per month. The Big Data team is responsible for building, running, and supporting the platform that makes this possible. We provide all the tools needed to acquire, generate, process, monitor, validate and access the data in the lake, for both streaming and batch data. We are also responsible for generating the foundational data. The systems we provide include Scribe, Kafka, Hive, Presto, Spark, Flink, Pinot, and others. The team is actively involved in Open Source, and we are planning to increase our engagement over time.

About the Role

Roku is in the process of modernizing its Big Data Platform. We are working on defining the new architecture to improve user experience, minimize cost and increase efficiency. Are you interested in helping us build this state-of-the-art big data platform? Are you an expert in Big Data technologies? Have you looked under the hood of these systems? Are you interested in Open Source? If you answered “Yes” to these questions, this role is for you!

What you will be doing

You will be responsible for streamlining and tuning existing Big Data systems and pipelines and building new ones. Making sure the systems run efficiently and at minimal cost is a top priority.
You will be making changes to the underlying systems and, if an opportunity arises, you can contribute your work back to open source.
You will also be responsible for supporting internal customers and on-call services for the systems we host. Providing a stable environment and a great user experience is another top priority for the team.

We are excited if you have

7+ years of production experience building big data platforms based upon Spark, Trino or equivalent
Strong programming expertise in Java, Scala, Kotlin or another JVM language
A robust grasp of distributed systems concepts, algorithms, and data structures
Strong familiarity with the Apache Hadoop ecosystem: Spark, Kafka, Hive/Iceberg/Delta Lake, Presto/Trino, Pinot, etc.
Experience working with at least 3 of the technologies/tools mentioned here: Big Data / Hadoop, Kafka, Spark, Trino, Flink, Airflow, Druid, Hive, Iceberg, Delta Lake, Pinot, Storm, etc.
Extensive hands-on experience with a public cloud, AWS or GCP
BS/MS degree in CS or equivalent

Benefits

Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources.
Local benefits include statutory and voluntary benefits, which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.

The Roku Culture

Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a fewer number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV.

We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002.

To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet.

By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.

Posted 1 week ago

Apply

10.0 - 20.0 years

35 - 60 Lacs

Mumbai, India

Work from Office

Design Full Stack solutions with cloud infrastructure (IaaS, PaaS, SaaS, on-premise, hybrid cloud)
Support application and infrastructure design and build as a subject matter expert
Implement proofs of concept to demonstrate the value of the solution designed
Provide consulting support to ensure delivery teams build scalable, extensible, high-availability, low-latency, and highly usable applications
Ensure solutions are aligned with requirements from all stakeholders such as Consumers, Business, IT, Security and Compliance
Ensure that all enterprise IT parameters and constraints are considered as part of the design
Design an appropriate technical solution to meet business requirements that may involve hybrid cloud environments, including cloud-native architecture, microservices, etc. Working knowledge of a high-availability, low-latency end-to-end technology stack is especially important, using both physical and virtual load balancing, caching, and scaling technology
Awareness of full stack web development frameworks such as Angular / React / Vue
Awareness of relational and non-relational / NoSQL databases such as MongoDB / MS SQL / Cassandra / Neo4j / DynamoDB
Awareness of data streaming platforms such as Apache Kafka / Apache Flink / AWS Kinesis
Working experience using AWS Step Functions or Azure Logic Apps with serverless Lambda or Azure Functions
Optimize and incorporate the inputs of specialists in solution design
Establish the validity of a solution and its components with both short-term and long-term implications
Identify the scalability options and implications on IT strategy and/or related implications of a solution, and include these in design activities and planning
Build strong professional relationships with key IT and business executives; be a trusted advisor for cross-functional and management teams
Partner effectively with other teams to ensure problem resolution
Provide solutions and advice; create architectures and presentations; document and effectively transfer knowledge to internal and external stakeholders
Demonstrate knowledge of public cloud technology and solutions; apply a broad understanding of technical innovations and trends in solving business problems
Manage special projects and strategic initiatives as assigned by management
Implement and assist in developing policies for Information Security and environmental compliance, ensuring the highest standards are maintained
Ensure adherence to SLAs with internal and external customers and compliance with Information Security Policies, including risk assessments and procedure reviews

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description

As a Data Engineer, you will build and maintain complex data pipelines and assemble large and complex datasets to generate business insights, enable data-driven decision making, and support the rapidly growing and dynamic business demand for data. You will have an opportunity to collaborate with teams of business analysts, managers, software development engineers, and data engineers to determine how best to design, implement and support solutions. You will be challenged and provided with tremendous growth opportunity in a customer-facing, fast-paced, agile environment.

Key job responsibilities

Design, implement and support analytical data platform solutions for data-driven decisions and insights
Design data schemas and operate internal data warehouses and SQL/NoSQL database systems
Work on data model designs, architecture, implementation, discussions and optimizations
Interface with other teams to extract, transform, and load data from a wide variety of data sources using AWS big data technologies like EMR, Redshift, Elasticsearch, etc.
Work on AWS technologies such as S3, Redshift, Lambda and Glue, and explore and learn the latest AWS technologies to provide new capabilities and increase efficiency
Work on the data lake platform and its components, such as Hadoop and Amazon S3
Work on SQL technologies on Hadoop such as Spark, Hive and Impala
Help continually improve ongoing analysis processes, optimizing or simplifying self-service support for customers
Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation
Own the development and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions

You must possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment. You will work closely with a group of talented peers, gain knowledge, and build deep domain expertise across Amazon’s business domains.

Basic Qualifications

3+ years of data engineering experience
4+ years of SQL experience
Experience with data modeling, warehousing and building ETL pipelines

Preferred Qualifications

Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - Amazon Dev Center India - Hyderabad
Job ID: A2983400

Posted 1 week ago

Apply

5.0 years

4 - 8 Lacs

Hyderābād

On-site

Req ID: 321816

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a DevOps Developer to join our team in Hyderabad, Telangana (IN-TG), India (IN).

Required Skills: DevOps Engineer

5+ years of hands-on experience with Big Data technologies and Cloudera cluster implementation or administration
5+ years of professional experience in Python development and CI/CD integration (primary skill)
3+ years of Python and/or scripting experience related to automation and APIs (primary skill)
3+ years of Ansible automation experience (primary skill)
Strong knowledge of Enterprise Linux OS security and configuration (primary skill)
Experience in containerization technology: deployments, monitoring, automation, etc.
3+ years of hands-on experience integrating cluster metrics with Grafana or similar
Strong understanding of, and experience with, distributed data platforms and the big data ecosystem (e.g. Hadoop, Hive, Spark)
Ability to work independently and collaborate effectively within cross-functional teams
Strong communication and documentation skills
Familiarity with RESTful APIs and web services
Knowledge of database systems (SQL and NoSQL)
Excellent problem-solving skills and attention to detail
Strong communication and teamwork abilities

About NTT DATA

NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com.

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.

Posted 1 week ago

Apply

Exploring Hive Jobs in India

Hive is a popular data warehousing tool used for querying and managing large datasets in distributed storage. In India, the demand for professionals with expertise in Hive is on the rise, with many organizations looking to hire skilled individuals for various roles related to data processing and analysis.
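To make the querying side concrete, here is a minimal HiveQL sketch of the kind of workload Hive handles. The table and column names (page_views, user_id, and so on) are hypothetical, chosen only for illustration:

```sql
-- Define an external table over files already sitting in distributed storage (e.g. HDFS or S3).
CREATE EXTERNAL TABLE IF NOT EXISTS page_views (
  user_id   STRING,
  url       STRING,
  view_time TIMESTAMP
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '/data/page_views';

-- Query it with plain SQL; Hive compiles this into distributed jobs on the cluster.
SELECT url, COUNT(*) AS views
FROM page_views
GROUP BY url
ORDER BY views DESC
LIMIT 10;
```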

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Delhi

These cities are known for their thriving tech industries and offer numerous opportunities for professionals looking to work with Hive.

Average Salary Range

The average salary range for Hive professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.

Career Path

Typically, a career in Hive progresses from roles such as Junior Developer or Data Analyst to Senior Developer, Tech Lead, and eventually Architect or Data Engineer. Continuous learning and hands-on experience with Hive are crucial for advancing in this field.

Related Skills

Apart from expertise in Hive, professionals in this field are often expected to have knowledge of SQL, Hadoop, data modeling, ETL processes, and data visualization tools like Tableau or Power BI.
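Since most Hive roles exercise these skills together, a small example helps tie them in. The following sketch shows a typical ETL step written in HiveQL, staging raw delimited data into a compressed columnar table; the raw_orders and clean_orders tables are hypothetical:

```sql
-- ETL in one statement: read the raw staging table, clean and cast the columns,
-- and write the result as a compressed, columnar ORC table for analysts to query.
CREATE TABLE clean_orders
STORED AS ORC
AS
SELECT order_id,
       CAST(amount AS DECIMAL(10,2)) AS amount,
       to_date(order_ts)             AS order_date
FROM raw_orders
WHERE order_id IS NOT NULL;
```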

Interview Questions

  • What is Hive and how does it differ from traditional databases? (basic)
  • Explain the difference between HiveQL and SQL. (medium)
  • How do you optimize Hive queries for better performance? (advanced)
  • What are the different types of tables supported in Hive? (basic)
  • Can you explain the concept of partitioning in Hive tables? (medium)
  • What is the significance of metastore in Hive? (basic)
  • How does Hive handle schema evolution? (advanced)
  • Explain the use of SerDe in Hive. (medium)
  • What are the various file formats supported by Hive? (basic)
  • How do you troubleshoot performance issues in Hive queries? (advanced)
  • Describe the process of joining tables in Hive. (medium)
  • What is dynamic partitioning in Hive and when is it used? (advanced)
  • How can you schedule jobs in Hive? (medium)
  • Discuss the differences between bucketing and partitioning in Hive. (advanced; see the sketch after this list)
  • How do you handle null values in Hive? (basic)
  • Explain the role of the Hive execution engine in query processing. (medium)
  • Can you give an example of a complex Hive query you have written? (advanced)
  • What is the purpose of the Hive metastore? (basic)
  • How does Hive support ACID transactions? (medium)
  • Discuss the advantages and disadvantages of using Hive for data processing. (advanced)
  • How do you secure data in Hive? (medium)
  • What are the limitations of Hive? (basic)
  • Explain the concept of bucketing in Hive and when it is used. (medium)
  • How do you handle schema evolution in Hive? (advanced)
  • Discuss the role of Hive in the Hadoop ecosystem. (basic)
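
Several of the questions above (partitioning, bucketing, and dynamic partitioning in particular) are easiest to answer with concrete DDL. The following is a minimal HiveQL sketch under assumed names; the sales and staging_sales tables and their columns are hypothetical:

```sql
-- Partitioning: each distinct sale_date becomes its own directory, so a query
-- filtering on sale_date can prune irrelevant partitions entirely.
-- Bucketing: within each partition, rows are hashed on customer_id into a
-- fixed number of files, enabling efficient sampling and bucket map joins.
CREATE TABLE sales (
  order_id    BIGINT,
  customer_id STRING,
  amount      DOUBLE
)
PARTITIONED BY (sale_date STRING)
CLUSTERED BY (customer_id) INTO 32 BUCKETS
STORED AS ORC;

-- Dynamic partitioning: Hive derives each row's target partition from the
-- last column of the SELECT, instead of requiring one INSERT per date.
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

INSERT OVERWRITE TABLE sales PARTITION (sale_date)
SELECT order_id, customer_id, amount, sale_date
FROM staging_sales;
```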

Closing Remark

As you explore job opportunities in the field of Hive in India, remember to showcase your expertise and passion for data processing and analysis. Prepare well for interviews by honing your skills and staying updated with the latest trends in the industry. Best of luck in your job search!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies