
2453 Hive Jobs - Page 28

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 - 8.0 years

3 - 7 Lacs

Chennai

On-site

Source: Glassdoor

The Apps Support Sr Analyst is a seasoned professional role. Applies in-depth disciplinary knowledge, contributing to the development of new techniques and the improvement of processes and workflow for the area or function. Integrates subject matter and industry expertise within a defined area. Requires an in-depth understanding of how areas collectively integrate within the sub-function, as well as coordinate and contribute to the objectives of the function and overall business. Evaluates moderately complex and variable issues with substantial potential impact, where developing an approach or taking an action involves weighing various alternatives and balancing potentially conflicting situations using multiple sources of information. Requires good analytical skills in order to filter, prioritize, and validate potentially complex and dynamic material from multiple sources. Strong communication and diplomacy skills are required. Regularly assumes informal/formal leadership roles within teams and is involved in coaching and training new recruits. Significant impact in terms of project size, geography, etc., influencing decisions through advice, counsel, and/or facilitating services to others in the area of specialization. The work and performance of all teams in the area are directly affected by the performance of the individual.

Requirements:
- 6-8 years of strong application production support experience in the financial industry
- Experience using call/ticketing software
- Hadoop/Big Data platform: working knowledge of the components and technologies under the Cloudera distribution, such as HDFS, Hive, Impala, Spark, YARN, Sentry, Oozie, and Kafka
- Very good knowledge of analyzing bottlenecks on the cluster: performance tuning, effective resource usage, capacity planning, and investigation of issues
- Perform daily performance monitoring of the cluster: implement best practices, ensure cluster stability, and create/analyze performance metrics (a hedged monitoring sketch follows this listing)
- Hands-on experience supporting applications built on Hadoop
- Linux: 4-6 years of experience
- Database: good SQL experience in any RDBMS
- Scheduler: Autosys, Control-M, or other schedulers will be an added advantage
- Programming languages: UNIX shell scripting; Python/Perl will be an added advantage
- Other applications: knowledge of / working experience with ITRS Active Console or other monitoring tools

- Job Family Group: Technology
- Job Family: Applications Support
- Time Type: Full time
- Most Relevant Skills: Please see the requirements listed above.
- Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
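
For context, the daily cluster performance monitoring this listing describes often amounts to polling the YARN ResourceManager REST API. The following is a minimal, hedged sketch only: the ResourceManager host and the alert threshold are assumptions, not anything specified by the employer.

```python
# Minimal sketch: poll YARN ResourceManager metrics for a daily health check.
# The RM host/port and alert threshold are assumptions; adjust for your cluster.
import requests

RM_URL = "http://resourcemanager.example.com:8088"  # hypothetical host

def check_cluster_health(max_used_pct: float = 85.0) -> None:
    resp = requests.get(f"{RM_URL}/ws/v1/cluster/metrics", timeout=10)
    metrics = resp.json()["clusterMetrics"]
    used_pct = 100.0 * metrics["allocatedMB"] / max(metrics["totalMB"], 1)
    print(f"Apps running: {metrics['appsRunning']}, pending: {metrics['appsPending']}")
    print(f"Memory used: {used_pct:.1f}% of {metrics['totalMB']} MB")
    print(f"Unhealthy nodes: {metrics['unhealthyNodes']}")
    if used_pct > max_used_pct or metrics["unhealthyNodes"] > 0:
        print("ALERT: cluster capacity or node health needs investigation")

if __name__ == "__main__":
    check_cluster_health()
```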

Posted 1 week ago

Apply

5.0 years

7 - 9 Lacs

Chennai

On-site

Source: Glassdoor

Chennai, Tamil Nadu, India. Qualification: 5+ years of experience with Java + Big Data as the minimum required skill set: Java, Microservices, Spring Boot, APIs, Big Data (Hive, Spark, PySpark). Skills Required: Java, Big Data, Spark. Role: as per the qualification above (Java, Microservices, Spring Boot, APIs, Big Data: Hive, Spark, PySpark). Experience: 5 to 7 years. Job Reference Number: 13049

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Greetings from TCS! TCS is hiring for Big Data (PySpark & Scala). Location: Chennai. Desired Experience Range: 4 - 6 Years. Must-Have: • PySpark • Hive. Good-to-Have: • Spark • HBase • DQ tool • Agile Scrum experience • Exposure to data ingestion from disparate sources onto a Big Data platform (a hedged ingestion sketch follows this listing). Thanks, Anshika
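
For flavor, ingesting disparate sources onto a Hadoop platform with PySpark often looks like the minimal sketch below. All paths, table names, and the JDBC connection details are hypothetical.

```python
# Minimal PySpark ingestion sketch: land CSV and JDBC sources into Hive.
# All paths, table names, and connection details are hypothetical.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("ingest-disparate-sources")
         .enableHiveSupport()
         .getOrCreate())

# Source 1: CSV files on HDFS
orders = spark.read.option("header", True).csv("hdfs:///landing/orders/")

# Source 2: an RDBMS table over JDBC
customers = (spark.read.format("jdbc")
             .option("url", "jdbc:postgresql://db.example.com:5432/crm")
             .option("dbtable", "public.customers")
             .option("user", "etl").option("password", "***")
             .load())

# Standardize and persist to partitioned Hive tables
(orders.withColumnRenamed("ord_dt", "order_date")
       .write.mode("overwrite")
       .partitionBy("order_date")
       .saveAsTable("raw.orders"))
customers.write.mode("overwrite").saveAsTable("raw.customers")
```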

Posted 1 week ago

Apply

2.0 - 4.0 years

8 - 12 Lacs

Mumbai

Work from Office

Source: Naukri

The SAS to Databricks Migration Developer will be responsible for migrating existing SAS code, data processes, and workflows to the Databricks platform. This role requires expertise in both SAS and Databricks, with a focus on converting SAS logic into scalable PySpark and Python code (a hedged conversion sketch follows this listing). The developer will design, implement, and optimize data pipelines, ensuring seamless integration and functionality within the Databricks environment. Collaboration with various teams is essential to understand data requirements and deliver solutions that meet business needs.
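
As a taste of the conversion work involved, a SAS DATA step with a WHERE clause and a derived column might translate to PySpark roughly as below. The tables and columns are invented for illustration, not taken from any real migration.

```python
# Hypothetical illustration of converting SAS logic to PySpark.
# SAS original:
#   data work.high_value;
#     set src.transactions;
#     where amount > 1000;
#     fee = amount * 0.02;
#   run;
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("sas-migration-example").getOrCreate()

transactions = spark.table("src.transactions")            # SET src.transactions
high_value = (transactions
              .where(F.col("amount") > 1000)              # WHERE amount > 1000
              .withColumn("fee", F.col("amount") * 0.02)) # fee = amount * 0.02
high_value.write.mode("overwrite").saveAsTable("work.high_value")
```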

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

Noida

On-site

Source: Glassdoor

Noida, Uttar Pradesh, India; Indore, Madhya Pradesh, India; Bangalore, Karnataka, India; Hyderabad, Telangana, India; Gurgaon, Haryana, India.

Qualification (required):
- Proven hands-on experience designing, developing, and supporting database projects for analysis in a demanding environment
- Proficient in database design techniques, both relational and dimensional
- Experience with, and a strong understanding of, the business analysis techniques used
- High proficiency in the use of SQL or MDX queries
- Ability to manage multiple maintenance, enhancement, and project-related tasks
- Ability to work independently on multiple assignments and to work collaboratively within a team
- Strong communication skills with both internal team members and external business stakeholders

Added advantage:
- Hadoop ecosystem, or AWS, Azure, or GCP cluster and processing
- Experience working on Hive, Spark SQL, Redshift, or Snowflake
- Experience working on Linux systems
- Experience with Tableau, MicroStrategy, Power BI, or other BI tools
- Expertise in programming in Python, Java, or shell script

Role (roles & responsibilities):
- Be the front-end person of the world's most scalable OLAP product company, Kyvos Insights
- Interact with the senior-most technical and business people of large enterprises to understand their big data strategy and their problem statements in that area
- Create, present, align customers with, and implement solutions around Kyvos products for the most challenging enterprise BI/DW problems
- Be the go-to person for customers regarding technical issues during the project
- Be instrumental in reading the pulse of the big data market and defining the roadmap of the product
- Lead a few small but highly efficient teams of big data engineers
- Report task status efficiently to stakeholders and customers
- Good verbal and written communication skills
- Be willing to work off hours to meet timelines
- Be willing to travel or relocate as per project requirements

Experience: 5 to 10 years. Job Reference Number: 11078

Posted 1 week ago

Apply

8.0 - 12.0 years

6 - 7 Lacs

Noida

On-site

Source: Glassdoor

Noida, Uttar Pradesh, India; Bangalore, Karnataka, India; Gurugram, Haryana, India; Hyderabad, Telangana, India; Indore, Madhya Pradesh, India; Pune, Maharashtra, India.

Qualification: Do you love to work on bleeding-edge Big Data technologies, do you want to work with the best minds in the industry, and create high-performance scalable solutions? Do you want to be part of the team that is solutioning next-gen data platforms? Then this is the place for you. You want to architect and deliver solutions involving data engineering on a petabyte scale of data that solve complex business problems. Impetus is looking for a Big Data Developer who loves solving complex problems and architecting and delivering scalable solutions across a full spectrum of technologies.

- Experience providing technical leadership in the Big Data space (Hadoop stack: Spark, M/R, HDFS, Hive, etc.)
- Able to communicate with the customer on both functional and technical aspects
- Expert-level proficiency in Python/PySpark
- Hands-on experience with shell/Bash scripting (creating and modifying script files)
- Experience with Control-M, AutoSys, or any job scheduler
- Experience visualizing and evangelizing next-generation infrastructure in the Big Data space (batch, near-real-time, and real-time technologies)
- Able to guide the team on any functional and technical issues
- Strong technical development experience: effectively writing code, code reviews, and best-practice code refactoring
- Passionate about continuous learning, experimenting with, and contributing to cutting-edge open-source technologies and software paradigms
- Good communication, problem-solving, and interpersonal skills; a self-starter and resourceful personality with the ability to manage pressure situations
- Capable of providing the design and architecture for typical business problems
- Exposure to and awareness of the complete PDLC/SDLC
- An out-of-the-box thinker, not limited to the work done in past projects

Must have: experience with AWS (EMR, Glue, S3, RDS, Redshift); cloud certification.

Skills Required: AWS, PySpark, Spark

Role:
- Evaluate and recommend the Big Data technology stack best suited for customer needs
- Design, architect, and implement various solutions arising out of high-concurrency systems
- Responsible for timely and quality deliveries
- Anticipate technological evolutions and ensure the technical directions and choices
- Develop efficient ETL pipelines through Spark or Hive (a hedged sketch follows this listing)
- Drive significant technology initiatives end to end and across multiple layers of architecture
- Provide strong technical leadership in adopting and contributing to open-source technologies related to Big Data across multiple engagements
- Design and architect complex, highly available, distributed, fail-safe compute systems dealing with a considerable amount (GB/TB) of data
- Identify and incorporate non-functional requirements (performance, scalability, monitoring, etc.) into the solution

Experience: 8 to 12 years. Job Reference Number: 12400
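
For illustration, an "efficient ETL pipeline through Spark or Hive" of the kind this role names often boils down to a partitioned insert-overwrite driven from Spark SQL. The database, table, and column names below are invented.

```python
# Sketch of a partitioned Hive ETL step driven from Spark SQL.
# Database, table, and column names are hypothetical.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-etl-sketch")
         # overwrite only the partitions the query produces
         .config("spark.sql.sources.partitionOverwriteMode", "dynamic")
         .config("hive.exec.dynamic.partition.mode", "nonstrict")
         .enableHiveSupport()
         .getOrCreate())

# Transform the raw layer and overwrite only yesterday's partition.
spark.sql("""
    INSERT OVERWRITE TABLE curated.daily_sales PARTITION (sale_date)
    SELECT store_id,
           SUM(amount) AS total_amount,
           COUNT(*)    AS txn_count,
           sale_date
    FROM   raw.sales
    WHERE  sale_date = date_sub(current_date(), 1)
    GROUP  BY store_id, sale_date
""")
```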

Posted 1 week ago

Apply

3.0 - 6.0 years

6 - 10 Lacs

Noida

On-site

Source: Glassdoor

Noida / Indore / Bangalore; Bangalore, Karnataka, India; Indore, Madhya Pradesh, India; Gurugram, Haryana, India.

Qualification:
- OLAP, data engineering, data warehousing, ETL
- Hadoop ecosystem, or AWS, Azure, or GCP cluster and processing
- Experience working on Hive, Spark SQL, Redshift, or Snowflake
- Experience writing and troubleshooting SQL programming or MDX queries
- Experience working on Linux
- Experience in Microsoft Analysis Services (SSAS) or OLAP tools
- Tableau, MicroStrategy, or any BI tools
- Expertise in programming in Python, Java, or shell script would be a plus

Skills Required: OLAP, MDX, SQL

Role:
- Be the front-end person of the world's most scalable OLAP product company, Kyvos Insights
- Interact with the senior-most technical and business people of large enterprises to understand their big data strategy and their problem statements in that area
- Create, present, align customers with, and implement solutions around Kyvos products for the most challenging enterprise BI/DW problems
- Be the go-to person for prospects regarding technical issues during the POV stage
- Be instrumental in reading the pulse of the big data market and defining the roadmap of the product
- Lead a few small but highly efficient teams of big data engineers
- Report task status efficiently to stakeholders and customers
- Good verbal and written communication skills
- Be willing to work off hours to meet timelines
- Be willing to travel or relocate as per project requirements

Experience: 3 to 6 years. Job Reference Number: 10350

Posted 1 week ago

Apply

15.0 years

5 - 8 Lacs

Indore

On-site

Source: Glassdoor

Indore, Madhya Pradesh, India; Bengaluru, Karnataka, India; Pune, Maharashtra, India; Hyderabad, Telangana, India; Noida, Uttar Pradesh, India.

Qualification:
- 15+ years of experience managing and implementing high-end software products
- Expertise in Java/J2EE, EDW/SQL, or Hadoop/Hive/Spark, preferably hands-on
- Good knowledge of any of the clouds (AWS/Azure/GCP): must have
- Has managed, delivered, and implemented complex projects dealing with considerable data size (TB/PB) and high complexity
- Experience handling migration projects
- Good to have: data ingestion, processing, and orchestration knowledge

Skills Required: Java Architecture, Big Data, Cloud Technologies

Role: Senior Technical Project Managers (STPMs) are in charge of handling all aspects of technical projects. This is a multi-dimensional and multi-functional role. You will need to be comfortable reporting program status to executives, as well as diving deep into technical discussions with internal engineering teams and external partners. You should collaborate with, and leverage, colleagues in business development, product management, analytics, marketing, engineering, and partner organizations. You have to manage multiple projects and ensure all releases happen on time. You are responsible for managing and delivering the technical solution that supports the organization's vision and strategic direction. You should be capable of working with different types of customers and should possess good customer-handling skills. Experience working in the ODC model and capable of presenting the technical design and architecture to senior technical stakeholders. Should have experience defining the project and delivery plan for each assignment, and be capable of making resource allocations as per the requirements of each assignment. Should have experience driving RFPs, and experience with account management: revenue forecasting, invoicing, SOW creation, etc.

Experience: 15 to 20 years. Job Reference Number: 13010

Posted 1 week ago

Apply

12.0 years

5 - 6 Lacs

Indore

On-site

Source: Glassdoor

Indore, Madhya Pradesh, India.

Qualification:
- BTech degree in computer science, engineering, or a related field of study, or 12+ years of related work experience
- 7+ years of design and implementation experience with large-scale, data-centric distributed applications
- Professional experience architecting and operating cloud-based solutions, with a good understanding of core disciplines like compute, networking, storage, security, and databases
- Good understanding of data engineering concepts like storage, governance, cataloging, data quality, and data modeling
- Good understanding of architecture patterns like data lake, data lakehouse, data mesh, etc.
- Good understanding of data warehousing concepts, with hands-on experience working with tools like Hive, Redshift, Snowflake, and Teradata
- Experience migrating or transforming legacy customer solutions to the cloud
- Experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, DynamoDB, DocumentDB, SNS, SQS, Lambda, EKS, and DataZone (a small boto3 sketch follows this listing)
- Thorough understanding of Big Data ecosystem technologies like Hadoop, Spark, Hive, and HBase, and other competent tools and technologies
- Understanding of designing analytical solutions leveraging AWS cognitive services like Textract, Comprehend, and Rekognition, in combination with SageMaker, is good to have
- Experience working with modern development workflows, such as git, continuous integration/continuous deployment pipelines, static code analysis tooling, infrastructure-as-code, and more
- Experience with a programming or scripting language: Python/Java/Scala
- AWS Professional/Specialty certification or relevant cloud expertise

Skills Required: AWS, Big Data, Spark, Technical Architecture

Role:
- Drive innovation within the Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries
- Capable of leading a technology team, inculcating an innovative mindset, and enabling fast-paced deliveries
- Able to adapt to new technologies, learn quickly, and manage high ambiguity
- Work with business stakeholders and attend/drive various architectural, design, and status calls with multiple stakeholders
- Exhibit good presentation skills, with a high degree of comfort speaking with executives, IT management, and developers
- Drive technology/software sales or pre-sales consulting discussions
- Ensure end-to-end ownership of all tasks assigned
- Ensure high-quality software development with complete documentation and traceability
- Fulfill organizational responsibilities (sharing knowledge and experience with other teams/groups)
- Conduct technical trainings/sessions, and write whitepapers, case studies, blogs, etc.

Experience: 10 to 18 years. Job Reference Number: 12895
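
As a small, hedged illustration of working with the AWS services named above, the sketch below starts a Glue job with boto3 and polls it to completion. The job name, argument, and region are invented for illustration.

```python
# Minimal boto3 sketch: start an AWS Glue job and wait for it to finish.
# Job name, arguments, and region are hypothetical.
import time
import boto3

glue = boto3.client("glue", region_name="ap-south-1")

run = glue.start_job_run(JobName="curate-daily-sales",
                         Arguments={"--run_date": "2024-01-01"})
run_id = run["JobRunId"]

while True:
    job_run = glue.get_job_run(JobName="curate-daily-sales", RunId=run_id)
    state = job_run["JobRun"]["JobRunState"]
    print("Glue job state:", state)
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)  # poll every 30 seconds
```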

Posted 1 week ago

Apply

3.0 - 6.0 years

0 Lacs

Indore

On-site

Source: Glassdoor

Indore, Madhya Pradesh, India; Bangalore, Karnataka, India; Noida, Uttar Pradesh, India.

Qualification: Pre-Sales Solution Engineer - India. Experience areas or skills:
- Pre-sales experience with software or analytics products
- Excellent verbal and written communication skills
- OLAP tools or Microsoft Analysis Services (MSAS)
- Data engineering, data warehousing, or ETL
- Hadoop ecosystem, or AWS, Azure, or GCP cluster and processing
- Tableau, MicroStrategy, or any BI tool
- HiveQL, Spark SQL, PL/SQL, or T-SQL
- Writing and troubleshooting SQL programming or MDX queries
- Working on Linux; programming in Python, Java, or JavaScript would be a plus
- Filling in RFPs or questionnaires from customers
- NDA, success criteria, project closure, and other documentation
- Be willing to travel or relocate as per requirements

Role:
- Act as the main point of contact for customer contacts involved in the evaluation process
- Product demonstrations to qualified leads
- Product demonstrations in support of marketing activity such as events or webinars
- Own RFP, NDA, PoC success criteria, PoC closure, and other documents
- Secure alignment on process and documents with the customer/prospect
- Own the technical-win phases of all active opportunities
- Understand the customer domain and database schema
- Provide OLAP and reporting solutions
- Work closely with customers to understand and resolve environment, OLAP cube, or reporting-related issues
- Coordinate with the solutioning team for execution of the PoC as per the success plan
- Create enhancement requests or identify requests for new features on behalf of customers or hot prospects

Experience: 3 to 6 years. Job Reference Number: 10771

Posted 1 week ago

Apply

3.0 years

0 Lacs

Andhra Pradesh

On-site

Source: Glassdoor

We are looking for a PySpark solutions developer and data engineer who can design and build solutions for one of our Fortune 500 client programs, which aims at building data standardization and curation capabilities on a Hadoop cluster. This high-visibility, fast-paced key initiative will integrate data across internal and external sources, provide analytical insights, and integrate with the customer's critical systems.

Key Responsibilities:
- Design, build, and unit test applications on the Spark framework in Python (a hedged test sketch follows this listing)
- Build PySpark-based applications for both batch and streaming requirements, which requires in-depth knowledge of the majority of Hadoop and NoSQL databases as well
- Develop and execute data pipeline testing processes and validate business rules and policies
- Build integrated solutions leveraging Unix shell scripting, RDBMS, Hive, the HDFS file system, HDFS file types, and HDFS compression codecs
- Create and maintain an integration and regression testing framework on Jenkins integrated with Bitbucket and/or Git repositories
- Participate in the agile development process, and document and communicate issues and bugs relative to data standards in scrum meetings
- Work collaboratively with onsite and offshore teams
- Develop and review technical documentation for the artifacts delivered
- Ability to solve complex data-driven scenarios and triage defects and production issues
- Ability to learn-unlearn-relearn concepts with an open and analytical mindset
- Participate in code releases and production deployments

Preferred Qualifications:
- BE/B.Tech/B.Sc. in Computer Science or Statistics from an accredited college or university
- Minimum 3 years of extensive experience in the design, build, and deployment of PySpark-based applications
- Expertise in handling complex, large-scale Big Data environments (preferably 20 TB+)
- Minimum 3 years of experience with Hive, YARN, and HDFS
- Hands-on experience writing complex SQL queries and exporting and importing large amounts of data using utilities
- Ability to build abstracted, modularized, reusable code components
- Prior experience with ETL tools, preferably Informatica PowerCenter, is advantageous
- Able to quickly adapt and learn; able to jump into an ambiguous situation and take the lead on resolution
- Able to communicate and coordinate across various teams; comfortable tackling new challenges and new ways of working
- Ready to move from traditional methods and adapt to agile ones; comfortable challenging your peers and leadership team
- Can prove yourself quickly and decisively; excellent communication skills and good customer centricity; strong target and high solution orientation

About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by applicable law. All employment is decided on the basis of qualifications, merit, and business need.
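
Since the role emphasizes unit testing Spark applications and validating business rules, here is a minimal pytest-style sketch. The transformation under test and its rule are invented for illustration.

```python
# Hedged sketch: unit-testing a PySpark business rule with pytest.
# The transformation and expected rule are hypothetical.
import pytest
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

def flag_high_value(df, threshold=1000):
    """Business rule under test: mark transactions above a threshold."""
    return df.withColumn("high_value", F.col("amount") > threshold)

@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.master("local[2]").appName("tests").getOrCreate()

def test_flag_high_value(spark):
    df = spark.createDataFrame([(1, 500.0), (2, 1500.0)], ["txn_id", "amount"])
    result = {r.txn_id: r.high_value for r in flag_high_value(df).collect()}
    assert result == {1: False, 2: True}
```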

Posted 1 week ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Organizations everywhere struggle under the crushing costs and complexities of "solutions" that promise to simplify their lives. To create a better experience for their customers and employees. To help them grow. Software is a choice that can make or break a business. Create better or worse experiences. Propel or throttle growth. Business software has become a blocker instead of a way to get work done. There's another option: Freshworks, with a fresh vision for how the world works.

At Freshworks, we build uncomplicated service software that delivers exceptional customer and employee experiences. Our enterprise-grade solutions are powerful, yet easy to use, and quick to deliver results. Our people-first approach to AI eliminates friction, making employees more effective and organizations more productive. Over 72,000 companies, including Bridgestone, New Balance, Nucor, S&P Global, and Sony Music, trust Freshworks' customer experience (CX) and employee experience (EX) software to fuel customer loyalty and service efficiency. And over 4,500 Freshworks employees make this possible, all around the world. Fresh vision. Real impact. Come build it with us.

Job Description Overview: We're looking for a versatile and strategic Content Specialist to join our Demand Generation team. This role is ideal for a writer with 4-7 years of experience who thrives in a fast-paced, campaign-driven environment and knows how to create content that converts. You'll support pipeline-building efforts by delivering clear, compelling content across multiple formats, from long-form assets like whitepapers to short-form ad copy, email campaigns, and video scripts. Your work will help translate campaign ideas into impactful assets tailored to buyer journeys and demand gen goals. If you enjoy crafting messaging that sparks interest and drives action, this role is for you.

Key Responsibilities:
- Campaign Content Creation: Develop high-impact content to support demand gen campaigns: emails, landing pages, blogs, whitepapers, infographics, and more. Turn key campaign themes into clear, benefit-driven content that resonates with prospects across funnel stages.
- Email Marketing: Write persuasive subject lines, crisp body copy, and strong CTAs for outbound and nurture emails. Collaborate with campaign managers to align email content with goals like lead generation, event promotion, and product education.
- Short-form Copywriting: Create concise, engaging copy for paid channels, including LinkedIn ads, Google Display Network (GDN), and Google Search ads. Adapt messaging for different stages of the funnel and various personas across industries or roles.
- Video Scriptwriting: Write scripts for explainers, promos, webinars, customer stories, and short-form videos that support brand and demand efforts. Partner with creative teams to visualize storylines and ensure message clarity and flow.
- Visual Content Support: Collaborate with design teams to develop infographics and visual storytelling formats.
- Stakeholder Collaboration: Work closely with campaign managers, product marketing, field marketing, and design to ensure message alignment and brand consistency. Translate product capabilities into prospect-friendly language without losing depth or clarity. Help distill complex ideas or product benefits into visually engaging, easy-to-understand content.

Qualifications: 4-7 years of experience in B2B content marketing, preferably in SaaS, IT, or tech-focused demand gen teams. A strong portfolio that shows range: emails, short-form ad copy, video scripts, and long-form assets. Ability to write with clarity, brevity, and persuasion across formats and channels. Strong collaboration and communication skills; you're comfortable interfacing with cross-functional teams. Familiarity with tools like Google Docs, project management platforms (Hive, Airtable), and content management systems (Contentful).

Additional Information: At Freshworks, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion, irrespective of their background, gender, race, sexual orientation, religion, and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities, and the business.

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 8 Lacs

New Delhi, Chennai, Bengaluru

Hybrid

Source: Naukri

Your day at NTT DATA: Senior GenAI Data Engineer. We are seeking an experienced Senior Data Engineer to join our team in delivering cutting-edge Generative AI (GenAI) solutions to clients. The successful candidate will be responsible for designing, developing, and deploying data pipelines and architectures that support the training, fine-tuning, and deployment of LLMs for various industries. This role requires strong technical expertise in data engineering, problem-solving skills, and the ability to work effectively with clients and internal teams.

What you'll be doing. Key Responsibilities:
- Design, develop, and manage data pipelines and architectures to support GenAI model training, fine-tuning, and deployment
- Data Ingestion and Integration: Develop data ingestion frameworks to collect data from various sources, transform it, and integrate it into a unified data platform for GenAI model training and deployment.
- GenAI Model Integration: Collaborate with data scientists to integrate GenAI models into production-ready applications, ensuring seamless model deployment, monitoring, and maintenance.
- Cloud Infrastructure Management: Design, implement, and manage cloud-based data infrastructure (e.g., AWS, GCP, Azure) to support large-scale GenAI workloads, ensuring cost-effectiveness, security, and compliance.
- Write scalable, readable, and maintainable code using object-oriented programming concepts in languages like Python, and utilize libraries like Hugging Face Transformers, PyTorch, or TensorFlow
- Performance Optimization: Optimize data pipelines, GenAI model performance, and infrastructure for scalability, efficiency, and cost-effectiveness.
- Data Security and Compliance: Ensure data security, privacy, and compliance with regulatory requirements (e.g., GDPR, HIPAA) across data pipelines and GenAI applications.
- Client Collaboration: Collaborate with clients to understand their GenAI needs, design solutions, and deliver high-quality data engineering services.
- Innovation and R&D: Stay up to date with the latest GenAI trends, technologies, and innovations, applying research and development skills to improve data engineering services.
- Knowledge Sharing: Share knowledge, best practices, and expertise with team members, contributing to the growth and development of the team.

Requirements:
- Bachelor's degree in computer science, engineering, or related fields (Master's recommended)
- Experience with vector databases (e.g., Pinecone, Weaviate, Faiss, Annoy) for efficient similarity search and storage of dense vectors in GenAI applications (a minimal Faiss sketch follows this listing)
- 5+ years of experience in data engineering, with a strong emphasis on cloud environments (AWS, GCP, Azure, or cloud-native platforms)
- Proficiency in programming languages like SQL, Python, and PySpark
- Strong data architecture, data modeling, and data governance skills
- Experience with Big Data platforms (Hadoop, Databricks, Hive, Kafka, Apache Iceberg), data warehouses (Teradata, Snowflake, BigQuery), and lakehouses (Delta Lake, Apache Hudi)
- Knowledge of DevOps practices, including Git workflows and CI/CD pipelines (Azure DevOps, Jenkins, GitHub Actions)
- Experience with GenAI frameworks and tools (e.g., TensorFlow, PyTorch, Keras)

Nice to have:
- Experience with containerization and orchestration tools like Docker and Kubernetes
- Integrating vector databases and implementing similarity search techniques, with a focus on GraphRAG, is a plus
- Familiarity with API gateway and service mesh architectures
- Experience with low-latency/streaming, batch, and micro-batch processing
- Familiarity with Linux-based operating systems and REST APIs

Location: Delhi or Bangalore. Workplace type: Hybrid working.
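
As a hedged illustration of the vector-database similarity search named in the requirements, here is a minimal Faiss sketch. The vectors are random stand-ins for real embeddings, and the dimension and k are arbitrary choices.

```python
# Minimal Faiss similarity-search sketch. Vectors are random stand-ins
# for real embeddings; dimension and k are arbitrary assumptions.
import numpy as np
import faiss

dim = 384                       # hypothetical embedding dimension
corpus = np.random.rand(10_000, dim).astype("float32")

index = faiss.IndexFlatL2(dim)  # exact L2 search; swap for IVF/HNSW at scale
index.add(corpus)

query = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query, 5)  # top-5 nearest neighbors
print("nearest neighbor ids:", ids[0], "distances:", distances[0])
```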

Posted 1 week ago

Apply

2.0 - 5.0 years

5 - 10 Lacs

Noida

Work from Office

Source: Naukri

Title: Team Lead - Business Analyst.

About the Role: Evangelize and demonstrate the value and impact of analytics for informed business decision-making by developing and deploying analytical solutions, and providing data-driven insights to business stakeholders to understand and solve various business nuances.

Key Responsibilities:
- Work closely with Product and Business stakeholders to empower data-driven decision-making and generate insights that will help grow the key metrics
- Write SQL/Hive queries for data mining (a hedged example follows this listing)
- Perform deep data analysis in MS Excel and share regular actionable insights
- Perform data-driven analytics to generate business insights
- Automate regular reports/MIS using tools like Hive and Google Data Studio, coordinating with different teams
- Follow up firmly with the teams concerned to make sure that our business and financial metrics are met
- Look at data from various cuts/cohorts to suggest insights: analysis based on multiple cohorts (transactions, GMV, revenue, gross margin, users, etc.) for both offline and online payments

Mandatory Technical Skills:
- Distinctive problem-solving and analysis skills, combined with impeccable business judgment
- Proficient in SQL/Hive/data mining and business analytics; proficient in Microsoft Excel
- Derive business insights from data with a focus on driving business-level metrics
- Minimum 2 years of experience as a Data Analyst / Business Analyst
- Ability to interact with and convince business stakeholders
- Hands-on with SQL (sub-queries and complex queries), Excel / Google Sheets, and data visualization tools (Looker Studio, Power BI)
- Ability to combine structured and unstructured data
- Has worked on large datasets on the order of 5 million records
- Experimentative mindset with attention to detail

Compensation: If you are the right fit, we believe in creating wealth for you. With an enviable 500 mn+ registered users, 21 mn+ merchants, and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers and merchants, and we are committed to it. India's largest digital lending story is brewing here. It's your opportunity to be a part of the story!
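
A hedged sketch of the kind of Hive cohort cut described above, pulled into pandas for further slicing. The host, table, and column names are invented, and it assumes PyHive is installed with a reachable HiveServer2.

```python
# Hypothetical cohort analysis: monthly GMV and transacting users from Hive.
# Host, table, and column names are invented for illustration.
import pandas as pd
from pyhive import hive  # assumes PyHive is installed and HiveServer2 is reachable

conn = hive.Connection(host="hive.example.com", port=10000, database="payments")

query = """
    SELECT date_format(txn_date, 'yyyy-MM') AS cohort_month,
           channel,                              -- 'offline' or 'online'
           SUM(amount)             AS gmv,
           COUNT(DISTINCT user_id) AS transacting_users
    FROM   transactions
    GROUP  BY date_format(txn_date, 'yyyy-MM'), channel
"""
df = pd.read_sql(query, conn)

# Pivot for an Excel-style view: months down, channels across.
print(df.pivot(index="cohort_month", columns="channel", values="gmv"))
```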

Posted 1 week ago

Apply

4.0 years

0 Lacs

Kochi, Kerala, India

On-site

Source: LinkedIn

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities: As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS
- Experienced in developing efficient software code for multiple use cases, leveraging the Spark framework with Python or Scala and Big Data technologies for various use cases built on the platform
- Experience developing streaming pipelines (a hedged sketch follows this listing)
- Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data / cloud technologies such as Apache Spark, Kafka, and cloud computing

Preferred Education: Master's degree.

Required Technical and Professional Expertise:
- Minimum 4+ years of experience in Big Data technologies, with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on cloud data platforms on AWS
- Experience with AWS EMR / AWS Glue / Databricks, AWS Redshift, and DynamoDB
- Good to excellent SQL skills
- Exposure to streaming solutions and message brokers like Kafka

Preferred Technical and Professional Experience: Certification in AWS and Databricks, or Cloudera Spark certified developers.
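
For flavor, a streaming pipeline of the kind this role mentions, reading from Kafka with Spark Structured Streaming and landing to storage, might look like the following minimal sketch. The broker, topic, and paths are invented.

```python
# Minimal Spark Structured Streaming sketch: Kafka -> parquet on HDFS.
# Broker address, topic, and paths are hypothetical. Requires the
# spark-sql-kafka connector package on the Spark classpath.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker.example.com:9092")
          .option("subscribe", "clickstream")
          .load()
          .select(F.col("key").cast("string"),
                  F.col("value").cast("string"),
                  "timestamp"))

query = (events.writeStream
         .format("parquet")
         .option("path", "hdfs:///data/clickstream/")
         .option("checkpointLocation", "hdfs:///checkpoints/clickstream/")
         .trigger(processingTime="1 minute")
         .start())
query.awaitTermination()
```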

Posted 1 week ago

Apply

12.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Source: LinkedIn

Role Description. Role Proficiency: Leverage expertise in a technology area (e.g., Informatica transformation, Teradata data warehouse, Hadoop, Analytics). Responsible for architecture for small/mid-size projects.

Outcomes:
- Implement data extraction and transformation for a data warehouse (ETL, data extracts, data load logic, mapping, workflows, stored procedures), a data analysis solution, data reporting solutions, or cloud data tools in any one of the cloud providers (AWS/Azure/GCP)
- Understand business workflows and related data flows; develop designs for data acquisition and data transformation or data modeling; apply business intelligence on data or design data fetching and dashboards
- Design information structure and work- and dataflow navigation; define backup, recovery, and security specifications
- Enforce and maintain naming standards and a data dictionary for data models
- Provide, or guide the team to perform, estimates
- Help the team develop proofs of concept (POCs) and solutions relevant to customer problems; able to troubleshoot problems while developing POCs
- Architect/Big Data specialty certification in AWS/Azure/GCP (general, for example Coursera or a similar learning platform, or any ML)

Measures of Outcomes:
- Percentage of billable time spent in a year developing and implementing data transformation or data storage
- Number of best practices documented for any new tool or technology emerging in the market
- Number of associates trained on the data service practice

Outputs Expected:

Strategy & Planning: Create or contribute short-term tactical solutions to achieve long-term objectives and an overall data management roadmap. Implement methods and procedures for tracking data quality, completeness, redundancy, and improvement. Ensure that data strategies and architectures meet regulatory compliance requirements. Begin engaging external stakeholders, including standards organizations, regulatory bodies, operators, and scientific research communities, or attend conferences with respect to data in the cloud.

Operational Management: Help architects establish governance, stewardship, and frameworks for managing data across the organization. Provide support in implementing the appropriate tools, software, applications, and systems to support data technology goals. Collaborate with project managers and business teams for all projects involving enterprise data. Analyse data-related issues with systems integration, compatibility, and multi-platform integration.

Project Control and Review: Provide advice to teams facing complex technical issues in the course of project delivery. Define and measure project- and program-specific architectural and technology quality metrics.

Knowledge Management & Capability Development: Publish and maintain a repository of solutions, best practices, standards, and other knowledge articles for data management. Conduct and facilitate knowledge sharing and learning sessions across the team. Gain industry-standard certifications in the technology or area of expertise. Support technical skill building (including hiring and training) for the team based on inputs from the project manager/RTEs. Mentor new members of the team in technical areas. Gain and cultivate domain expertise to provide the best and most optimized solutions to customers (delivery).

Requirement Gathering and Analysis: Work with customer business owners and other teams to collect, analyze, and understand the requirements, including NFRs (or define NFRs). Analyze gaps and trade-offs based on the current system context and industry practices; clarify the requirements by working with the customer. Define the systems and sub-systems that make up the programs.

People Management: Set goals and manage the performance of team engineers. Provide career guidance to technical specialists and mentor them.

Alliance Management: Identify alliance partners based on an understanding of service offerings and client requirements. In collaboration with the architect, create a compelling business case around the offerings. Conduct beta testing of the offerings and their relevance to the program.

Technology Consulting: In collaboration with Architects II and III, analyze the application and technology landscape, processes, and tools to arrive at the architecture options best fit for the client program. Analyze cost vs. benefits of solution options. Support Architects II and III in creating a technology/architecture roadmap for the client. Define the architecture strategy for the program.

Innovation and Thought Leadership: Participate in internal and external forums (seminars, paper presentations, etc.). Understand the client's existing business at the program level and explore new avenues to save cost and bring process efficiency. Identify business opportunities to create reusable components/accelerators, and reuse existing components and best practices.

Project Management Support: Assist the PM/Scrum Master/Program Manager to identify technical risks and come up with mitigation strategies.

Stakeholder Management: Monitor the concerns of internal stakeholders, like product managers and RTEs, and external stakeholders, like client architects, on architecture aspects. Follow through on commitments to achieve timely resolution of issues. Conduct initiatives to meet client expectations. Work to expand the professional network in the client organization at the team and program levels.

New Service Design: Identify potential opportunities for new service offerings based on customer voice and partner inputs. Conduct beta testing / POCs as applicable. Develop collateral and guides for GTM.

Skill Examples: Use data services knowledge to create POCs that meet business requirements; contextualize the solution to the industry under the guidance of architects. Use technology knowledge to create POCs and (reusable) assets under the guidance of the specialist. Apply best practices in your own area of work, helping with performance troubleshooting and other complex troubleshooting. Define, decide, and defend the technology choices made; review solutions under guidance. Use knowledge of technology trends to provide inputs on potential areas of opportunity for UST. Use independent knowledge of design patterns, tools, and principles to create high-level designs for the given requirements; evaluate multiple design options and choose the appropriate options for the best possible trade-offs. Conduct knowledge sessions to enhance the team's design capabilities. Review the low- and high-level designs created by specialists for efficiency (consumption of hardware and memory, memory leaks, etc.). Use knowledge of the software development process, tools, and techniques to identify and assess incremental improvements to the software development process, methodology, and tools. Take technical responsibility for all stages of the software development process. Conduct optimal coding with a clear understanding of memory leakage and related impact. Implement global standards and guidelines relevant to programming and development; come up with points of view and new technological ideas. Use knowledge of project management and agile tools and techniques to support, plan, and manage medium-size projects/programs as defined within UST, identifying risks and mitigation strategies. Use knowledge of project metrics to understand their relevance to the project; collect and collate project metrics and share them with the relevant stakeholders. Use knowledge of estimation and resource planning to create estimates and plan resources for specific modules or small projects with detailed requirements or user stories in place. Strong proficiency in understanding data workflows and dataflow. Attention to detail. High analytical capabilities.

Knowledge Examples: Data visualization; data migration; RDBMSs (relational database management systems); SQL; Hadoop technologies like MapReduce, Hive, and Pig; programming languages, especially Python and Java; operating systems like UNIX and MS Windows; backup/archival software.

Additional Comments: Snowflake Architect. Key Responsibilities:
- Solution Design: Design the overall data architecture within Snowflake, including database/schema structures, data flow patterns (ELT/ETL strategies involving Snowflake), and integration points with other systems (source systems, BI tools, data science platforms).
- Data Modeling: Design efficient and scalable physical data models within Snowflake. Define table structures, distribution/clustering keys, data types, and constraints to optimize storage and query performance (a hedged DDL sketch follows this listing).
- Security Architecture: Design the overall security framework, including the RBAC strategy, data masking policies, and encryption standards, and how Snowflake security integrates with broader enterprise security policies.
- Performance and Scalability Strategy: Design solutions with performance and scalability in mind. Define warehouse sizing strategies, query optimization patterns, and best practices for development teams. Ensure the architecture can handle future growth in data volume and user concurrency.
- Cost Optimization Strategy: Design architectures that are inherently cost-effective, making strategic choices about data storage, warehouse usage patterns, and feature utilization (e.g., when to use materialized views, streams, tasks).
- Technology Evaluation and Selection: Evaluate and recommend specific Snowflake features (e.g., Snowpark, Streams, Tasks, External Functions, Snowpipe) and third-party tools (ETL/ELT, BI, governance) that best fit the requirements.
- Standards and Governance: Define best practices, naming conventions, development guidelines, and governance policies for using Snowflake effectively and consistently across the organization.
- Roadmap and Strategy: Align the Snowflake data architecture with overall business intelligence and data strategy goals. Plan for future enhancements and platform evolution.
- Technical Leadership: Provide guidance and mentorship to developers, data engineers, and administrators working with Snowflake.

Key Skills: Deep understanding of Snowflake's advanced features and architecture. Strong data warehousing concepts and data modeling expertise. Solution architecture and system design skills. Experience with cloud platforms (AWS, Azure, GCP) and how Snowflake integrates. Expertise in performance tuning principles and techniques at an architectural level. Strong understanding of data security principles and implementation patterns. Knowledge of various data integration patterns (ETL, ELT, streaming). Excellent communication and presentation skills to articulate designs to technical and non-technical audiences. Strategic thinking and planning abilities.

Looking for 12+ years of experience to join our team.

Skills: Snowflake, Data modeling, Cloud platforms, Solution architecture
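
As a hedged taste of the clustering and masking-policy design work named above, the sketch below issues illustrative Snowflake DDL through the Python connector. The account credentials, tables, roles, and policy names are all invented.

```python
# Hypothetical Snowflake DDL sketch: clustering key + masking policy.
# Account credentials and all object names are invented for illustration.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345", user="ARCH_USER", password="***",
    warehouse="ADMIN_WH", database="SALES", schema="PUBLIC")
cur = conn.cursor()

# Cluster a large fact table on its most common filter columns.
cur.execute("ALTER TABLE orders CLUSTER BY (order_date, region)")

# Mask PII for everyone outside a designated analyst role.
cur.execute("""
    CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING)
    RETURNS STRING ->
      CASE WHEN CURRENT_ROLE() IN ('PII_ANALYST') THEN val
           ELSE '***MASKED***' END
""")
cur.execute("ALTER TABLE customers MODIFY COLUMN email "
            "SET MASKING POLICY email_mask")
conn.close()
```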

Posted 1 week ago

Apply

1.0 - 3.0 years

3 - 5 Lacs

New Delhi, Chennai, Bengaluru

Hybrid

Source: Naukri

Your day at NTT DATA: We are seeking an experienced Data Engineer to join our team in delivering cutting-edge Generative AI (GenAI) solutions to clients. The successful candidate will be responsible for designing, developing, and deploying data pipelines and architectures that support the training, fine-tuning, and deployment of LLMs for various industries. This role requires strong technical expertise in data engineering, problem-solving skills, and the ability to work effectively with clients and internal teams.

What you'll be doing. Key Responsibilities:
- Design, develop, and manage data pipelines and architectures to support GenAI model training, fine-tuning, and deployment
- Data Ingestion and Integration: Develop data ingestion frameworks to collect data from various sources, transform it, and integrate it into a unified data platform for GenAI model training and deployment.
- GenAI Model Integration: Collaborate with data scientists to integrate GenAI models into production-ready applications, ensuring seamless model deployment, monitoring, and maintenance.
- Cloud Infrastructure Management: Design, implement, and manage cloud-based data infrastructure (e.g., AWS, GCP, Azure) to support large-scale GenAI workloads, ensuring cost-effectiveness, security, and compliance.
- Write scalable, readable, and maintainable code using object-oriented programming concepts in languages like Python, and utilize libraries like Hugging Face Transformers, PyTorch, or TensorFlow
- Performance Optimization: Optimize data pipelines, GenAI model performance, and infrastructure for scalability, efficiency, and cost-effectiveness.
- Data Security and Compliance: Ensure data security, privacy, and compliance with regulatory requirements (e.g., GDPR, HIPAA) across data pipelines and GenAI applications.
- Client Collaboration: Collaborate with clients to understand their GenAI needs, design solutions, and deliver high-quality data engineering services.
- Innovation and R&D: Stay up to date with the latest GenAI trends, technologies, and innovations, applying research and development skills to improve data engineering services.
- Knowledge Sharing: Share knowledge, best practices, and expertise with team members, contributing to the growth and development of the team.

Requirements:
- Bachelor's degree in computer science, engineering, or related fields (Master's recommended)
- Experience with vector databases (e.g., Pinecone, Weaviate, Faiss, Annoy) for efficient similarity search and storage of dense vectors in GenAI applications
- 5+ years of experience in data engineering, with a strong emphasis on cloud environments (AWS, GCP, Azure, or cloud-native platforms)
- Proficiency in programming languages like SQL, Python, and PySpark
- Strong data architecture, data modeling, and data governance skills
- Experience with Big Data platforms (Hadoop, Databricks, Hive, Kafka, Apache Iceberg), data warehouses (Teradata, Snowflake, BigQuery), and lakehouses (Delta Lake, Apache Hudi)
- Knowledge of DevOps practices, including Git workflows and CI/CD pipelines (Azure DevOps, Jenkins, GitHub Actions)
- Experience with GenAI frameworks and tools (e.g., TensorFlow, PyTorch, Keras)

Nice to have:
- Experience with containerization and orchestration tools like Docker and Kubernetes
- Integrating vector databases and implementing similarity search techniques, with a focus on GraphRAG, is a plus
- Familiarity with API gateway and service mesh architectures
- Experience with low-latency/streaming, batch, and micro-batch processing
- Familiarity with Linux-based operating systems and REST APIs

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

We are seeking a highly experienced Senior Data Software Engineer to join our dynamic team and tackle challenging projects that will enhance your skills and career. As a Senior Engineer, your contributions will be critical in designing and implementing Data solutions across a variety of projects. The ideal candidate will possess deep experience in Big Data and associated technologies, with a strong emphasis on Apache Spark, Python, Azure, and AWS.

Responsibilities:
- Develop and execute end-to-end Data solutions to meet complex business needs
- Work collaboratively with interdisciplinary teams to comprehend project needs and deliver superior software solutions
- Apply your expertise in Apache Spark, Python, Azure, and AWS to create scalable and efficient data processing systems
- Maintain and enhance the performance, security, and scalability of Data applications
- Keep abreast of industry trends and technological advancements to foster continuous improvement in our development practices

Requirements:
- 5-8 years of direct experience in Data and related technologies
- Advanced knowledge and hands-on experience with Apache Spark
- High-level proficiency with Hadoop and Hive
- Proficiency in Python
- Prior experience with AWS and Azure native cloud data services

Technologies: Hadoop, Hive

Posted 1 week ago

Apply

1.0 - 2.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

About Us: Headquartered in Noida, India, Paytm Insurance Broking Private Limited (PIBPL), a wholly owned subsidiary of One97 Communications (OCL), is an online insurance marketplace that offers insurance products across all leading insurance companies, with products across auto, life, and health insurance, and provides policy management and claim services for our customers.

Expectations/Requirements:
1. Using automated tools to extract data from primary and secondary sources
2. Removing corrupted data and fixing coding errors and related problems
3. Developing and maintaining databases and data systems, reorganizing data in a readable format
4. Preparing reports for management stating trends, patterns, and predictions using relevant data
5. Preparing final analysis reports for the stakeholders to understand the data-analysis steps, enabling them to take important decisions based on various facts and trends
6. Supporting the data warehouse in identifying and revising reporting requirements
7. Setting up robust automated dashboards to drive performance management
8. Deriving business insights from data with a focus on driving business-level metrics
9. 1-2 years of experience in business analysis or a related field

Superpowers/Skills that will help you succeed in this role:
1. Problem solving: assess what data is required to prove hypotheses and derive actionable insights
2. Analytical skills: top-notch Excel skills are necessary
3. Strong communication and project management skills
4. Hands-on with SQL, Hive, and Excel, and comfortable handling very large-scale data
5. Ability to interact with and convince business stakeholders
6. Experience working with web analytics platforms is an added advantage
7. Experimentative mindset with attention to detail
8. Proficiency in advanced SQL, MS Excel, and Python or R is a must
9. Exceptional analytical and conceptual thinking skills
10. The ability to influence stakeholders and work closely with them to determine acceptable solutions
11. Advanced technical skills
12. Excellent documentation skills
13. Fundamental analytical and conceptual thinking skills
14. Experience creating detailed reports and giving presentations
15. Competency in Microsoft applications including Word, Excel, and Outlook
16. A track record of following through on commitments
17. Excellent planning, organizational, and time management skills
18. Experience leading and developing top-performing teams
19. A history of leading and supporting successful projects

Education: Any graduate; a graduate from a premium institute is preferred.

Why join us:
1. We give immense opportunities to make a difference, and have a great time doing that
2. You are challenged and encouraged here to do meaningful work for yourself and customers/clients
3. We are successful, and our successes are rooted in our people's collective energy and unwavering focus on the customer, and that's how it will always be

Posted 1 week ago

Apply

6.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Source: LinkedIn

Visa is a world leader in payments and technology, with over 259 billion payment transactions flowing safely between consumers, merchants, financial institutions, and government entities in more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable, and secure payments network, enabling individuals, businesses, and economies to thrive, driven by a common purpose: to uplift everyone, everywhere, by being the best way to pay and be paid. Make an impact with a purpose-driven industry leader. Join us today and experience Life at Visa.

Job Description. Essential Functions (required at the SDS level): Provide technical leadership in a team that generates business insights based on big data, identifies actionable recommendations, and communicates the findings to clients.
- Brainstorm innovative ways to use our unique data to answer business problems
- Communicate with clients to understand the challenges they face and convince them with data
- Extract and understand data to form an opinion on how best to help our clients and derive relevant insights
- Develop visualizations to make your complex analyses accessible to a broad audience
- Find opportunities to craft products out of analyses that are suitable for multiple clients
- Work with stakeholders throughout the organization to identify opportunities for leveraging Visa data to drive business solutions
- Mine and analyze data from company databases to drive optimization and improvement of product, marketing techniques, and business strategies for Visa and its clients
- Assess the effectiveness and accuracy of new data sources and data-gathering techniques
- Develop custom data models and algorithms to apply to data sets
- Use predictive modeling to increase and optimize customer experiences, revenue generation, data insights, and other business outcomes
- Partner with a variety of Visa teams to provide comprehensive solutions
- Synthesize ideas/proposals in writing and engage in productive discussions with external or internal stakeholders
- Provide guidance in modern analytic techniques and business applications to unlock the value of Visa's unique data set, in keeping with market trends, client needs, and emerging techniques
- Organize and manage multiple data science projects with diverse cross-functional stakeholders

Qualifications. Basic Qualifications:
- Bachelor's or Master's degree in Statistics, Operations Research, Applied Mathematics, Economics, Data Science, Business Analytics, Computer Science, or a related technical field
- 6 years of work experience with a bachelor's degree, or 4 years of work experience with a Master's degree, or 2 years of work experience with a PhD
- Extracting and aggregating data from large data sets using SQL/Hive or Spark (a hedged sketch follows this listing)
- Analyzing large data sets using programming languages such as Python/R
- Developing and refining machine learning models for predictive analytics, classification, and regression tasks

Preferred Qualifications:
- 10+ years of work experience with a bachelor's degree, or 8+ years of work experience with an advanced degree (e.g., Master's, MBA), or 3 years of experience with a PhD
- 6+ years' experience in data-based decision-making or quantitative analysis
- Knowledge of ETL pipelines in Spark, Python, and Hive that process transaction- and account-level data and standardize data fields across various data sources
- Generating and visualizing data-based insights in software such as Tableau
- Communicating data-driven insights and conveying actionable recommendations
- Managing analytics/data science projects from scoping to delivery, and engaging with internal/external stakeholders
- Previous exposure to financial services, credit cards, or merchant analytics is a plus

Additional Information: Visa is an EEO Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, or protected veteran status. Visa will also consider for employment qualified applicants with criminal histories in a manner consistent with EEOC guidelines and applicable local law.

Posted 1 week ago

Apply

15.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description
Publicis Sapient is looking for a Principal Data Scientist to join its Data Science practice. The role is to not only be a trusted advisor to our clients for driving the next generation of innovation in applied machine learning and statistical analysis, but also a leader in advancing the group's capabilities into the future. As part of the team, you will be responsible for leading teams that create data-driven solutions that at the core are driven by relevant learning algorithms. In this role you will educate internal and external teams on conceptual models for problem solving in the machine learning realm and help translate goals and objectives into data-driven solutions. You will enjoy working with some of the most diverse data sets in the world, cutting-edge technology, and the ability to see your insights turned into real business results on a regular basis. The role is critical in helping advance the application of machine learning as a core building block of core market offerings in eCommerce, advertising, AdTech and business transformation. In addition, you will be responsible for directing analysis that informs and improves the effectiveness of the planning, execution and optimization of marketing tactics. As an evangelist for data science, you will partner with leaders in various Publicis Sapient divisions, industries and geographies to ensure that increasingly more solutions we bring to the market are data driven and are supported by a strong data sciences group. Core areas of focus for this group include applications in customer segmentation, media and advertising optimization solutions, recommender systems, fraud analytics, personalization systems and forecasting.

Your Impact
- Design and implement high-performance and robust analytical models in support of product and project objectives
- Research and bring innovations to develop next-generation solutions in core functional areas related to digital marketing & customer experience solution blocks - Content and Commerce, AdTech, Customer Relationship Management (CRM), Campaign Management
- Provide technical thought leadership, coaching and mentorship in the field of data science while working with engineering and other cross-functional teams
- Help enhance the ML ops platform to deliver cutting-edge Generative AI propositions for multiple industries such as BFSI, Retail and Healthcare
- Evolve the approach for the application of machine learning to existing program and project disciplines
- Design controlled experiments to measure changes to the new user experience
- Segment customers and markets to improve targeting and messaging of product recommendations and offers
- Direct research and evaluation of open source and vendor solutions in the analytics platforms space to guide solutions
- Be responsible for solution and code quality, including providing detailed and constructive design and code reviews
- Help establish standards in machine learning and statistical analysis to ensure consistency in quality across projects and teams, and identify relevant process efficiencies
- Assess client needs and requirements to ensure your team is adopting the appropriate approach to solve client challenges

Qualifications

Your Skills & Experience:
- Ph.D. in Computer Science, Math, Physics, Engineering, Statistics or another quantitative or computational field; advanced degrees preferred
- 15+ years of experience applying statistical learning methods to develop data-driven solutions, preferably in the eCommerce and AdTech domains
- Strong understanding of Gen AI tools and frameworks, fine-tuning LLMs for different domains, and a basic understanding of LLM ops
- Demonstrated proficiency with various approaches in regression, classification, and cluster analysis
- Experience in statistical programming in R, SAS, SPSS, MATLAB or Python
- Expertise in one or more programming languages: Python, R, Scala
- Expertise in SQL and familiarity with Hive and Pig

A Tip From The Hiring Manager
Ideal candidates will have prior experience in traditional AI; however, recent experience should be in Gen AI, Agentic AI, etc. This person should be highly organized, adapt quickly to change and be hands-on with code.

Additional Information
Gender-Neutral Policy
18 paid holidays throughout the year
Generous parental leave and new parent transition program
Flexible work arrangements
Employee Assistance Programs to help you in wellness and well-being

Company Description
Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients' businesses through designing the products and services their customers truly value.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Teamwork makes the stream work.

Roku is changing how the world watches TV
Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the Team
Roku pioneered TV streaming and continues to innovate and lead the industry. Continued success relies on investing in the Roku Content Platform, so we deliver a high-quality streaming TV experience at a global scale. As part of our Content Platform team, you join a small group of highly skilled engineers that own significant responsibility in crafting, developing and maintaining our large-scale backend systems, data pipelines, storage, and processing services. We provide all insights regarding content on Roku devices.

About the Role
We are looking for a Senior Software Engineer with extensive experience in backend development, data engineering and data analytics to focus on building the next-level content platform and data intelligence, which empowers Search, Recommendation, and many more critical systems across the Roku platform. This is an excellent role for a senior professional who enjoys a high level of visibility, thrives on having a critical business impact, is able to make critical decisions, and is excited to work on a core data platform component which is crucial for many streaming components at Roku.

What You'll Be Doing
- Work closely with the product management team, content data platform services, and other internal consumer teams to contribute extensively to our content data platform and underlying architecture
- Build low-latency and optimized streaming and batch data pipelines to enable downstream services
- Build and support our microservices-based, event-driven backend systems and data platform
- Design and build data pipelines for batch, near-real-time, and real-time processing
- Participate in architecture discussions, influence the product roadmap, and take ownership and responsibility over new projects

We're excited if you have
- 8+ years of professional experience as a Software Engineer
- Proficiency in Java/Scala/Python
- Deep understanding of backend technologies, architecture patterns, and best practices, including microservices, RESTful APIs, message queues, caching, and databases
- Strong analytical and problem-solving skills, data structures and algorithms, with the ability to translate complex technical requirements into scalable and efficient solutions
- Experience with microservice and event-driven architectures
- Experience with Apache Spark and Apache Flink
- Experience with Big Data frameworks and tools: MapReduce, Hive, Presto, HDFS, YARN, Kafka, etc.
- Experience with Apache Airflow or similar workflow orchestration tooling for ETL
- Experience with cloud platforms: AWS (preferred), GCP, etc.
- Strong communication and presentation skills
- BS in Computer Science; MS in Computer Science preferred

Benefits
Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.

The Roku Culture
Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a fewer number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV.

We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet.

By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.

Posted 1 week ago

Apply

5.0 - 10.0 years

9 - 16 Lacs

Hyderabad

Work from Office

Job Title: Big Data Engineer - Java & Spark
Location: Hyderabad
Work Mode: Onsite (5 days a week)
Experience: 5 to 10 Years

Job Summary:
We are hiring an experienced Big Data Engineer with strong expertise in Java, Apache Spark, and Big Data technologies. You will be responsible for designing and implementing scalable data pipelines that support real-time and batch processing for data-driven applications.

Key Responsibilities:
- Develop and maintain scalable batch and streaming data pipelines using Java and Apache Spark
- Work with Hadoop, Hive, Kafka, and HDFS to manage and process large datasets
- Collaborate with data analysts, scientists, and other engineering teams to understand data requirements
- Optimize Spark jobs and ensure performance and reliability in production
- Maintain data quality, governance, and security best practices

Required Skills:
- 5-10 years of hands-on experience in data engineering or related roles
- Strong programming skills in Java
- Expertise in Apache Spark for data processing and transformation
- Good understanding of Big Data frameworks: Hadoop, Hive, Kafka, HDFS
- Experience with distributed systems and large-scale data processing
- Familiarity with cloud platforms such as AWS, GCP, or Azure

Good to Have:
- Experience with workflow orchestration tools like Airflow or NiFi
- Knowledge of containerization (Docker, Kubernetes)
- Exposure to CI/CD pipelines and version control (e.g., Git)

Education:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field

Why Join Us:
- Be part of a high-impact data engineering team
- Work on modern data platforms with the latest open-source tools
- Strong tech culture with career growth opportunities

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Description

About CloudBees
CloudBees provides the leading software delivery platform for enterprises, enabling them to continuously innovate, compete, and win in a world powered by the digital experience. Designed for the world's largest organizations with the most complex requirements, CloudBees enables software development organizations to deliver scalable, compliant, governed, and secure software from the code a developer writes to the people who use it. The platform connects with other best-of-breed tools, improves the developer experience, and enables organizations to bring digital innovation to life continuously, adapt quickly, and unlock business outcomes that create market leaders and disruptors.

CloudBees was founded in 2010 and is backed by Goldman Sachs, Morgan Stanley, Bridgepoint Credit, HSBC, Golub Capital, Delta-v Capital, Matrix Partners, and Lightspeed Venture Partners. Visit www.cloudbees.com and follow us on Twitter, LinkedIn, and Facebook.

WHAT YOU'LL DO!
These are some of the tasks that you'll be engaged on:
- Design, develop, and maintain automated test scripts using Playwright with TypeScript/JavaScript, as well as Selenium with Java, to ensure comprehensive test coverage across applications.
- Enhance the existing Playwright framework by implementing modular test design and optimizing performance, while also utilizing Cucumber for Behavior-Driven Development (BDD) scenarios.
- Execute functional, regression, integration, performance, and security testing of web applications, APIs and microservices.
- Collaborate in an Agile environment, participating in daily stand-ups, sprint planning, and retrospectives to ensure alignment on testing strategies and workflows.
- Troubleshoot and analyze test failures and defects using debugging tools and techniques, including logging and tracing within Playwright, Selenium, Postman, Grafana, etc.
- Document and report test results, defects, and issues using Jira and Confluence, ensuring clarity and traceability for all test activities.
- Implement page object models and reusable test components in both Playwright and Selenium to promote code reusability and maintainability.
- Integrate automated tests into CI/CD pipelines using Jenkins and GitHub Actions, ensuring seamless deployment and testing processes.
- Collaborate on Git for version control, managing branches and pull requests to maintain code quality and facilitate teamwork.
- Mentor and coach junior QA engineers on best practices for test automation, Playwright and Selenium usage, and CI/CD workflows.
- Research and evaluate new tools and technologies to enhance testing processes and coverage.

WHAT DO YOU NEED TO SHINE IN THIS ROLE?
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
- At least 5 years of experience in software testing, with at least 3 years of experience in test automation.
- Ability to write functional tests, test plans and test strategies.
- Ability to configure test environments and test data using automation tools.
- Experience creating an automated regression / CI test suite using Cucumber with Playwright (preferred) or Selenium, and REST APIs.
- Proficiency in one or more programming languages: Java, JavaScript or TypeScript.
- Experience in testing web applications, APIs, and microservices using various tools and frameworks such as Selenium, Cucumber, etc.
- Experience with SAST/DAST testing tools (preferred).
- Experience working with cloud platforms such as AWS, Azure, GCP, etc.
- Experience working with CI/CD tools such as Jenkins, GitLab, GitHub, etc.
- Experience writing queries and working with databases such as MySQL, MongoDB, Neo4j, Cassandra, etc.
- Experience working with tools such as Postman, JMeter, Grafana, etc.
- Exposure to security standards and compliance.
- Experience working with Agile methodologies such as Scrum, Kanban, etc.
- Ability to work independently and as part of a team.
- Ability to learn new technologies and tools quickly and adapt to changing requirements.
- Highly analytical mindset, with a logical approach to finding solutions and performing root cause analysis.
- Able to prioritize between critical and non-critical path items.
- Excellent communication skills, with the ability to communicate test results to stakeholders covering the functional aspect of the system and its impact.

What You'll Get
- Highly competitive compensation, benefits, and vacation package
- Ability to work for one of the fastest growing companies with some of the most talented people in the industry
- Team outings
- Fun, hardworking, and casual environment
- Endless growth opportunities

We have a culture of movers and shakers and are leading the way for everyone else with a vision to transform the industry. We are authentic in who we are. We believe in our abilities and strengths to change the world for the better. Being inclusive and working together is at the heart of everything we do. We are naturally curious. We ask the right questions, challenge what can be done differently and come up with intelligent solutions to the problems we find. If that's you, get ready to bee impactful and join the hive.

Scam Notice
Please be aware that there are individuals and organizations that may attempt to scam job seekers by offering fraudulent employment opportunities in the name of CloudBees. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. Please note that CloudBees will never ask for any personal account information, such as cell phone, credit card details or bank account numbers, during the recruitment process. Additionally, CloudBees will never send you a check for any equipment prior to employment. All communication from our recruiters and hiring managers will come from official company email addresses (@cloudbees.com) or from Paylocity, and will never ask for any payment or fee to be paid, or purchases to be made, by the job seeker. If you are contacted by anyone claiming to represent CloudBees and you are unsure of their authenticity, please do not provide any personal/financial information and contact us immediately at tahelp@cloudbees.com. We take these matters very seriously and will work to ensure that any fraudulent activity is reported and dealt with appropriately. If you feel you have been scammed in the US, please report it to the Federal Trade Commission at https://reportfraud.ftc.gov/#/. In Europe, please contact the European Anti-Fraud Office at https://anti-fraud.ec.europa.eu/olaf-and-you/report-fraud_en

Signs of a Recruitment Scam
- Ensure there are no other domains before or after @cloudbees.com. For example: "name.dr.cloudbees.com"
- Check any documents for poor spelling and grammar - this is often a sign that fraudsters are at work.
- They provide a generic email address such as @Yahoo or @Hotmail as a point of contact.
- You are asked for money, an "administration fee", "security fee" or an "accreditation fee".
- You are asked for cell phone account information.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Tesco India • Bengaluru, Karnataka, India • Hybrid • Full-Time • Permanent • Apply by 19-Jun-2025

About the role
Please refer to the "You will be responsible for" section below.

What is in it for you
At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of three pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable.

Salary - Your fixed pay is the guaranteed pay as per your contract of employment.
Performance Bonus - Opportunity to earn additional compensation bonus based on performance, paid annually.
Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy.
Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their family. Our medical insurance provides coverage for dependents including parents or in-laws.
Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.

You will be responsible for
- Developing and leading a high-performing team, creating an environment for success by setting direction and coaching them to succeed through inspiring conversations every day (refer to the expectations of a manager at Tesco - the minimum standards)
- Promoting a culture of continuous improvement (CI) within their teams to drive operational improvements
- Being accountable for achieving the team's objectives, stakeholder management and escalation management
- Providing inputs that impact the function's plans and policies, and influencing the budget and resources in their scope
- Being accountable to EA and market leadership for building the analytics roadmap and improving the analytical maturity of partnering functions, with an in-depth understanding of key priorities and outcomes
- Shaping and owning the analytics workplan, proactively spotting sizeable opportunities and successfully delivering programs that will result in disproportionate returns
- Providing thought leadership in scoping business problems and solutions, bringing disruptive, depth-oriented solutions to complex problems, and institutionalizing robust ways of working with business partners
- Partnering with TBS and market finance teams to measure the value delivered through analytics initiatives
- Building impact-driven teams by creating an environment for success: setting direction and objectives, mentoring managers, and guiding teams to craft analytical assets which will deliver value in a sustainable manner
- Being the voice of, and representing, Enterprise Analytics in internal and external forums
- Developing managers and colleagues to succeed through inspiring conversations every day

You will need
- Understanding of machine learning techniques: linear and logistic regression, decision trees, Random Forest, XGBoost and neural networks
- Knowledge of Python, SQL, Hive and visualization tools (e.g. Tableau)
- Retail expertise, partnership management, analytics
- Conceptual application to the larger business context, storyboarding, managing managers

About us
Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues.

Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single-entity traditional shared services centre in Bengaluru, India (from 2004) to a global, purpose-driven, solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business. TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation.

Posted 1 week ago

Apply

Exploring Hive Jobs in India

Hive is a popular data warehousing tool used for querying and managing large datasets in distributed storage. In India, the demand for professionals with expertise in Hive is on the rise, with many organizations looking to hire skilled individuals for various roles related to data processing and analysis.
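For readers new to the tool, the minimal sketch below shows the pattern that makes Hive popular: define a table over files already sitting in distributed storage, then query it with SQL-like HiveQL while Hive compiles the query into distributed jobs. All table, column, and path names here are hypothetical, chosen only to illustrate the pattern:

  -- Define a table over existing files in distributed storage
  -- (Hive does not move the data; it just overlays a schema)
  CREATE EXTERNAL TABLE page_views (
    user_id   STRING,
    url       STRING,
    view_time TIMESTAMP
  )
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
  STORED AS TEXTFILE
  LOCATION '/data/raw/page_views';

  -- Ad-hoc analysis with familiar SQL-like syntax
  SELECT url, COUNT(*) AS views
  FROM page_views
  GROUP BY url
  ORDER BY views DESC
  LIMIT 10;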

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Delhi

These cities are known for their thriving tech industries and offer numerous opportunities for professionals looking to work with Hive.

Average Salary Range

The average salary range for Hive professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.

Career Path

Typically, a career in Hive progresses from roles such as Junior Developer or Data Analyst to Senior Developer, Tech Lead, and eventually Architect or Data Engineer. Continuous learning and hands-on experience with Hive are crucial for advancing in this field.

Related Skills

Apart from expertise in Hive, professionals in this field are often expected to have knowledge of SQL, Hadoop, data modeling, ETL processes, and data visualization tools like Tableau or Power BI.
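To make the ETL part of that list concrete, here is a minimal, hypothetical Hive batch step of the kind such roles routinely write; the raw and curated table names are illustrative only:

  -- Read a raw table, clean and normalize it, and rebuild a curated table
  INSERT OVERWRITE TABLE curated_page_views
  SELECT
    user_id,
    lower(url)  AS url,   -- normalize URLs to lower case
    view_time
  FROM raw_page_views
  WHERE user_id IS NOT NULL;  -- drop records with a missing key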

Interview Questions

  • What is Hive and how does it differ from traditional databases? (basic)
  • Explain the difference between HiveQL and SQL. (medium)
  • How do you optimize Hive queries for better performance? (advanced)
  • What are the different types of tables supported in Hive? (basic)
  • Can you explain the concept of partitioning in Hive tables? (medium)
  • What is the significance of metastore in Hive? (basic)
  • How does Hive handle schema evolution? (advanced)
  • Explain the use of SerDe in Hive. (medium)
  • What are the various file formats supported by Hive? (basic)
  • How do you troubleshoot performance issues in Hive queries? (advanced)
  • Describe the process of joining tables in Hive. (medium)
  • What is dynamic partitioning in Hive and when is it used? (advanced)
  • How can you schedule jobs in Hive? (medium)
  • Discuss the differences between bucketing and partitioning in Hive. (advanced) - see the sketch after this list
  • How do you handle null values in Hive? (basic)
  • Explain the role of the Hive execution engine in query processing. (medium)
  • Can you give an example of a complex Hive query you have written? (advanced)
  • What is the purpose of the Hive metastore? (basic)
  • How does Hive support ACID transactions? (medium)
  • Discuss the advantages and disadvantages of using Hive for data processing. (advanced)
  • How do you secure data in Hive? (medium)
  • What are the limitations of Hive? (basic)
  • Explain the concept of bucketing in Hive and when it is used. (medium)
  • How do you handle schema evolution in Hive? (advanced)
  • Discuss the role of Hive in the Hadoop ecosystem. (basic)
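
Many of these questions come back to partitioning and bucketing, so a minimal HiveQL sketch is worth studying before an interview. All table and column names below are hypothetical, chosen only to illustrate the pattern:

  -- Partitioning splits data into directories by column value, so
  -- queries filtering on the partition column scan far less data
  CREATE TABLE sales (
    order_id STRING,
    amount   DOUBLE
  )
  PARTITIONED BY (order_date STRING)
  STORED AS ORC;

  -- Dynamic partitioning: Hive routes each row to its partition
  SET hive.exec.dynamic.partition=true;
  SET hive.exec.dynamic.partition.mode=nonstrict;
  INSERT OVERWRITE TABLE sales PARTITION (order_date)
  SELECT order_id, amount, order_date FROM staging_sales;

  -- Bucketing hashes rows into a fixed number of files per partition,
  -- which enables efficient sampling and bucketed map-side joins
  CREATE TABLE customers (
    customer_id STRING,
    name        STRING
  )
  CLUSTERED BY (customer_id) INTO 32 BUCKETS
  STORED AS ORC;

  -- Partition pruning in action: only the matching directory is read
  SELECT SUM(amount) FROM sales WHERE order_date = '2024-01-01';

The rule of thumb embedded in this sketch: partition on low-cardinality columns you filter by, and bucket on high-cardinality columns you join or sample on.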

Closing Remark

As you explore job opportunities in the field of Hive in India, remember to showcase your expertise and passion for data processing and analysis. Prepare well for interviews by honing your skills and staying updated with the latest trends in the industry. Best of luck in your job search!
