2453 Hive Jobs - Page 27

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 - 10.0 years

2 - 6 Lacs

Pune

Work from Office

Req ID: 323909. We are currently seeking a Data Ingest Engineer to join our team in Pune, Maharashtra (IN-MH), India (IN).

Job Duties: The Applications Development Technology Lead Analyst is a senior-level position responsible for establishing and implementing new or revised application systems and programs in coordination with the Technology team. This is a position within the Ingestion team of the DRIFT data ecosystem. The focus is on ingesting data in a timely, complete, and comprehensive fashion while using the latest technology available to Citi, leveraging new and creative methods for repeatable data ingestion from a variety of data sources, and always asking "is this the best way to solve this problem?" and "am I providing the highest quality data to my downstream partners?"

Responsibilities:
• Partner with multiple management teams to ensure appropriate integration of functions to meet goals, and identify and define necessary system enhancements to deploy new products and process improvements
• Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
• Provide expertise in the area and advanced knowledge of applications programming, and ensure application design adheres to the overall architecture blueprint
• Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
• Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
• Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
• Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
• Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Minimum Skills Required:
• 6-10 years of relevant experience in an Apps Development or systems analysis role
• Extensive experience in systems analysis and programming of software applications
• Application development using Java, Scala, Spark
• Familiarity with event-driven applications and streaming data
• Experience with Confluent Kafka, HDFS, Hive, and structured and unstructured database systems (SQL and NoSQL)
• Experience with various schema and data types: JSON, Avro, Parquet, etc.
• Experience with various ELT methodologies and formats: JDBC, ODBC, API, webhook, SFTP, etc.
• Experience working with Agile and version control tool sets (JIRA, Bitbucket, Git, etc.)
• Ability to adjust priorities quickly as circumstances dictate
• Demonstrated leadership and project management skills
• Consistently demonstrates clear and concise written and verbal communication
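
By way of illustration only (not part of the posting): a minimal PySpark Structured Streaming sketch of the Kafka-to-data-lake ingestion pattern this role describes. The posting lists Java/Scala/Spark; Python is used here for brevity, and the broker, topic, schema, and paths are invented placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

# Hypothetical event schema; real ingestion schemas (e.g. Avro via a schema
# registry) would be resolved from the source, not hard-coded.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("payload", StringType()),
    StructField("event_ts", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
       .option("subscribe", "ingest.events")              # placeholder topic
       .load())

# Kafka values arrive as bytes; cast to string and parse the JSON body.
parsed = (raw.select(from_json(col("value").cast("string"), schema).alias("e"))
             .select("e.*"))

# Land the stream as Parquet, with a checkpoint for restart bookkeeping.
(parsed.writeStream
       .format("parquet")
       .option("path", "/data/landing/events")            # placeholder path
       .option("checkpointLocation", "/data/chk/events")
       .start()
       .awaitTermination())
```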

Posted 1 week ago

Apply

1.0 - 4.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Req ID: 321498. We are currently seeking a Data Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Job Duties:
• Work closely with the Lead Data Engineer to understand business requirements, and analyse and translate these requirements into technical specifications and solution design.
• Work closely with the Data Modeller to ensure data models support the solution design.
• Develop, test, and fix ETL code using Snowflake, Fivetran, SQL, and stored procedures.
• Analyse data and ETL code for defects and service tickets raised against the solution in production.
• Develop documentation and artefacts to support projects.

Minimum Skills Required:
• ADF
• Fivetran (orchestration & integration)
• SQL
• Snowflake DWH
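
For context, a minimal sketch of the Snowflake ELT pattern the posting names (SQL transformations plus a stored-procedure call) driven from Python. The connection details, table names, and the procedure are hypothetical.

```python
import snowflake.connector

# All connection values and object names are placeholders.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
)
try:
    cur = conn.cursor()
    # Typical pattern: a loader such as Fivetran lands raw tables, then SQL
    # transforms them into clean staging tables.
    cur.execute("""
        INSERT INTO STAGING.ORDERS_CLEAN
        SELECT order_id, TRY_TO_DATE(order_date) AS order_date, amount
        FROM STAGING.ORDERS_RAW
        WHERE amount IS NOT NULL
    """)
    # Stored procedures are invoked with CALL.
    cur.execute("CALL STAGING.REFRESH_ORDER_MARTS()")
finally:
    conn.close()
```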

Posted 1 week ago

Apply

8.0 - 13.0 years

13 - 17 Lacs

Bengaluru

Work from Office

We are currently seeking a Cloud Solution Delivery Lead Consultant to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Data Engineer Lead:
• Robust hands-on experience with industry-standard tooling and techniques, including SQL, Git, and CI/CD pipelines (mandatory)
• Management, administration, and maintenance of data streaming tools such as Kafka/Confluent Kafka and Flink
• Experienced with software support for applications written in Python & SQL
• Administration, configuration, and maintenance of Snowflake & dbt
• Experience with data product environments that use tools such as Kafka Connect, Snyk, Confluent Schema Registry, Atlan, IBM MQ, SonarQube, Apache Airflow, Apache Iceberg, DynamoDB, Terraform, and GitHub
• Debugging issues, root cause analysis, and applying fixes
• Management and maintenance of ETL processes (bug fixing and batch job monitoring)

Training & Certification:
• Apache Kafka Administration, Snowflake Fundamentals/Advanced Training

Experience:
• 8 years of experience in a technical role working with AWS
• At least 2 years in a leadership or management role
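
As a hedged illustration of the Kafka administration work listed above, a short confluent-kafka AdminClient sketch; the broker address, topic name, and sizing are placeholders, not values from the posting.

```python
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "broker:9092"})  # placeholder broker

# Inspect the cluster's existing topics.
metadata = admin.list_topics(timeout=10)
for name in sorted(metadata.topics):
    print(name)

# Create a topic with explicit partitioning and replication; create_topics
# returns one future per topic, which raises on failure.
futures = admin.create_topics(
    [NewTopic("orders.events", num_partitions=6, replication_factor=3)]
)
for topic, future in futures.items():
    future.result()
    print(f"created {topic}")
```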

Posted 1 week ago

Apply

4.0 - 5.0 years

6 - 10 Lacs

Chennai

Work from Office

We are currently seeking a Data Visualization Expert - QuickSight to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).

What awaits you / Job Profile: Location: Bangalore and Chennai, hybrid mode. Notice period: immediate to 10 days.
• Develop reports using Amazon QuickSight.
• Data Visualization Development: Design and develop data visualizations using Amazon QuickSight to present complex data in a clear and understandable format. Create interactive dashboards and reports that allow end-users to explore data and draw meaningful conclusions.
• Data Analysis: Collaborate with data analysts and business stakeholders to understand data requirements, gather insights, and transform raw data into actionable visualizations.
• Dashboard User Interface (UI) and User Experience (UX): Ensure that the data visualizations are user-friendly, intuitive, and aesthetically pleasing. Optimize the user experience by incorporating best practices in UI/UX design.
• Data Integration: Work closely with data engineers and data architects to ensure seamless integration of data sources into QuickSight, enabling real-time and up-to-date visualizations.
• Performance Optimization: Identify and address performance bottlenecks in data queries and visualization rendering to ensure quick and responsive dashboards.
• Data Security and Governance: Ensure compliance with data security policies and governance guidelines when handling sensitive data within QuickSight.
• Training and Documentation: Provide training and support to end-users and stakeholders on how to interact with and interpret visualizations effectively. Create detailed documentation of the visualization development process.
• Stay Updated with Industry Trends: Keep up to date with the latest data visualization trends, technologies, and best practices to continuously enhance the quality and impact of visualizations.
• Use the Agile methodology (Scrum/Kanban), attend daily standups, and use Agile tools.
• Collaborate with cross-functional teams and stakeholders to ensure data security, privacy, and compliance with regulations.
• Proficiency in software development best practices: secure coding standards, unit testing frameworks, code coverage, quality gates.
• Ability to lead and deliver change in a very productive way; lead technical discussions with customers to find the best possible solutions.
• Work closely with the Project Manager and Solution Architect, and manage client communication (as and when required).

What should you bring along:
Must Have:
• Relevant work experience in analytics, reporting, and business intelligence tools.
• 4-5 years of hands-on experience in data visualization, including at least 2 years developing visualizations using Amazon QuickSight.
• Experience working with various data sources and databases.
• Ability to work with large datasets and design efficient data models for visualization.
Nice to Have:
• AI project implementation and AI methods.
Must-have technical skills: QuickSight, SQL, AWS. Good-to-have technical skills: Tableau, data engineering.
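
For flavour, a small, hedged example of programmatic QuickSight access via boto3, the kind of automation that can accompany dashboard development; the account ID and region are placeholders.

```python
import boto3

ACCOUNT_ID = "123456789012"  # placeholder AWS account
qs = boto3.client("quicksight", region_name="ap-south-1")

# Enumerate dashboards, e.g. as the starting point for an audit or a
# documentation job on top of dashboard development.
resp = qs.list_dashboards(AwsAccountId=ACCOUNT_ID)
for dash in resp.get("DashboardSummaryList", []):
    print(dash["DashboardId"], dash["Name"])
```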

Posted 1 week ago

Apply

2.0 - 5.0 years

6 - 10 Lacs

Chennai

Work from Office

We are currently seeking a Data Visualization Expert - QuickSight to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).

What awaits you / Job Profile:
• Design and develop data visualizations using Amazon QuickSight to present complex data in clear and understandable dashboards.
• Create interactive dashboards and reports that allow end-users to explore data and draw meaningful conclusions.
• Work on data preparation and ensure good-quality data is used in visualization.
• Collaborate with data analysts and business stakeholders to understand data requirements, gather insights, and transform raw data into actionable visualizations.
• Ensure that the data visualizations are user-friendly, intuitive, and aesthetically pleasing; optimize the user experience by incorporating best practices.
• Identify and address performance bottlenecks in data queries and visualization.
• Ensure compliance with data security policies and governance guidelines when handling sensitive data within QuickSight.
• Provide training and support to end-users and stakeholders on how to interact with dashboards.
• Self-manage, explore the latest technical developments, and incorporate them into the project.
• Experience in analytics, reporting, and business intelligence tools.
• Use the Agile methodology, attend daily standups, and use Agile tools.
• Lead technical discussions with customers to find the best possible solutions.

What should you bring along:
Must Have:
• Overall experience of 2-5 years in data visualization development, with a minimum of 2 years in QuickSight and 1-2 years in other BI tools such as Tableau, Power BI, or Qlik.
• Good at writing complex SQL scripts and dataset modeling.
• Hands-on with AWS: Athena, RDS, S3, IAM, permissions, logging and monitoring services.
• Experience working with various data sources and databases such as Oracle, MySQL, S3, and Athena.
• Ability to work with large datasets and design efficient data models for visualization.
• Prior experience working in an Agile (Scrum/Kanban) model.
Nice to Have:
• Knowledge of data ingestion and data pipelines in AWS.
• Knowledge of Amazon Q or AWS LLM services to enable AI integration.
Must-have skills: QuickSight, Tableau, SQL, AWS. Good-to-have skills: QlikView, data engineering, AWS LLM.
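
Since this role lists hands-on AWS Athena experience, here is a minimal boto3 sketch of running an Athena query and reading the results; the database, table, and S3 output location are placeholders.

```python
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

# Database, table, and the S3 results bucket are placeholders.
qid = athena.start_query_execution(
    QueryString="SELECT region, COUNT(*) AS orders FROM sales.orders GROUP BY region",
    QueryExecutionContext={"Database": "sales"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes; Athena is asynchronous.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows:  # the first row is the header
        print([c.get("VarCharValue") for c in row["Data"]])
```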

Posted 1 week ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Line of Service: Advisory. Industry/Sector: Not Applicable. Specialism: Data, Analytics & AI. Management Level: Manager.

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities:
• Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources.
• Implement data ingestion, processing, and storage solutions on the Azure cloud platform, leveraging services such as Azure Databricks, Azure Data Lake Storage, and Azure Synapse Analytics.
• Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements.
• Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness.
• Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance.

Requirements:
• Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark.
• Hands-on experience with cloud platforms, particularly Azure, and proficiency in Azure services such as Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database.
• Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices.
• Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL).
• Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus.

Mandatory Skill Sets: Spark, PySpark, Azure. Preferred Skill Sets: Spark, PySpark, Azure. Years of Experience Required: 8-12. Education Qualification: B.Tech / M.Tech / MBA / MCA. Degrees/Fields of Study required: Bachelor of Engineering, Master of Engineering, Master of Business Administration. Certifications: not specified. Required Skills: Data Science. Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 32 more}. Travel Requirements: Not Specified. Available for Work Visa Sponsorship? No. Government Clearance Required? No.
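
For illustration, a minimal PySpark sketch of the kind of ADLS-to-ADLS transformation pipeline described above, as it might run on Azure Databricks; the storage paths and column names are invented, not taken from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

# Placeholder ADLS Gen2 paths; real storage accounts and containers will differ.
src = "abfss://raw@mylake.dfs.core.windows.net/sales/orders/"
dst = "abfss://curated@mylake.dfs.core.windows.net/sales/orders_daily/"

orders = spark.read.json(src)

# Aggregate raw order events into a daily revenue table.
daily = (orders
         .withColumn("order_date", F.to_date("order_ts"))
         .groupBy("order_date", "region")
         .agg(F.sum("amount").alias("revenue"),
              F.count("*").alias("orders")))

daily.write.mode("overwrite").partitionBy("order_date").parquet(dst)
```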

Posted 1 week ago

Apply

4.0 - 6.0 years

15 - 25 Lacs

Noida

Work from Office

We are looking for a highly experienced Senior Data Engineer with deep expertise in Snowflake to lead efforts in optimizing the performance of our data warehouse and enable faster, more reliable reporting. You will be responsible for improving query efficiency, data pipeline performance, and overall reporting speed by tuning Snowflake environments, optimizing data models, and collaborating with application development teams.

Roles and Responsibilities:
• Analyze and optimize Snowflake data warehouse performance to support high-volume, complex reporting workloads.
• Identify bottlenecks in SQL queries, ETL/ELT pipelines, and data models impacting report generation times.
• Implement performance tuning strategies including clustering keys, materialized views, result caching, micro-partitioning, and query optimization.
• Collaborate with BI teams and business analysts to understand reporting requirements and translate them into performant data solutions.
• Design and maintain efficient data models (star schema, snowflake schema) tailored for fast analytical querying.
• Develop and enhance ETL/ELT processes ensuring minimal latency and high throughput using Snowflake's native features.
• Monitor system performance and proactively recommend architectural improvements and capacity planning.
• Establish best practices for data ingestion, transformation, and storage aimed at improving report delivery times.
• Experience with Unistore will be an added advantage.
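
To make the tuning levers concrete, a hedged sketch of clustering and materialized-view statements issued through the Snowflake Python connector; all object names and credentials below are placeholders.

```python
import snowflake.connector

# Connection values and object names are placeholders.
conn = snowflake.connector.connect(account="my_account", user="de_user", password="***")
cur = conn.cursor()

# Cluster a large fact table on the columns most report filters touch.
cur.execute("ALTER TABLE ANALYTICS.FACT_SALES CLUSTER BY (sale_date, region)")

# Precompute a hot aggregate so dashboards read a materialized view
# (an Enterprise-edition feature) instead of scanning the fact table.
cur.execute("""
    CREATE OR REPLACE MATERIALIZED VIEW ANALYTICS.MV_DAILY_SALES AS
    SELECT sale_date, region, SUM(amount) AS revenue
    FROM ANALYTICS.FACT_SALES
    GROUP BY sale_date, region
""")

# Check how well-clustered the table is after a few DML cycles.
cur.execute("SELECT SYSTEM$CLUSTERING_INFORMATION('ANALYTICS.FACT_SALES', '(sale_date, region)')")
print(cur.fetchone()[0])
conn.close()
```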

Posted 1 week ago

Apply

6.0 - 11.0 years

20 - 35 Lacs

Bengaluru

Work from Office

NOTE: We are only looking for candidates who can join immediately or within 15 days.

Experience level: 6+ years. Location: Bangalore (candidates who are currently in Bangalore can apply).

Qualifications we are looking for:
• Master's/Bachelor's degree in Computer Science, Electrical Engineering, Information Systems or another technical discipline; advanced degree preferred.
• Minimum of 7+ years of software development experience (with a concentration in data-centric initiatives), with demonstrated expertise in leveraging standard development best-practice methodologies.
• Minimum 4+ years of experience in Hadoop using core Java programming, Spark, Scala, Hive, and Go.
• Expertise in the object-oriented programming language Java.
• Experience using CI/CD processes, version control, and bug tracking tools.
• Experience handling very large data volumes in real-time and batch mode.
• Experience with automation of job execution and validation.
• Strong knowledge of database concepts.
• Strong team player.
• Strong communication skills with proven ability to present complex ideas and document them in a clear and concise way.
• Quick learner; self-starter, detailed and in-depth.

Posted 1 week ago

Apply

4.0 - 6.0 years

20 - 25 Lacs

Noida

Work from Office

Technical Requirements:
• SQL (advanced level): Strong command of complex SQL logic, including window functions, CTEs, and pivot/unpivot, and proficiency in stored procedure/SQL script development. Experience writing maintainable SQL for transformations.
• Python for ETL: Ability to write modular and reusable ETL logic using Python. Familiarity with JSON manipulation and API consumption.
• ETL Pipeline Development: Experienced in developing ETL/ELT pipelines, data profiling, validation, quality/health checks, error handling, logging and notifications, etc.

Nice-to-Have Skills:
• Experience with AWS Redshift, Databricks, and Yellowbrick.
• Knowledge of CI/CD practices for data workflows.

Key Responsibilities:
• Collaborate with analysts and data architects to develop and test ETL pipelines using SQL and Python in Databricks and Yellowbrick.
• Perform related data quality checks and implement validation frameworks.
• Optimize queries for performance and cost-efficiency.

Roles and Responsibilities:
• Leverage expertise in AWS Redshift, PostgreSQL, Databricks, and Yellowbrick to design and implement scalable data solutions.
• Partner with data analysts and architects to build and test robust ETL pipelines using SQL and Python.
• Develop and maintain data validation frameworks to ensure high data quality and reliability.
• Optimize database queries to enhance performance and ensure cost-effective data processing.
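
A small runnable sketch of the modular SQL-plus-Python ETL style these requirements describe, using a window function inside a CTE to deduplicate a staging table. SQLite stands in so the example is self-contained; the table and column names are invented, and the same SQL pattern applies on Databricks or Yellowbrick.

```python
import sqlite3

# SQLite 3.25+ supports window functions, so this sketch runs as-is.
DEDUP_SQL = """
WITH ranked AS (
    SELECT id, customer, amount, loaded_at,
           ROW_NUMBER() OVER (PARTITION BY id ORDER BY loaded_at DESC) AS rn
    FROM staging_orders
)
SELECT id, customer, amount FROM ranked WHERE rn = 1 ORDER BY id
"""

def dedup_latest(conn: sqlite3.Connection) -> list:
    """Reusable ETL step: keep only the latest record per business key."""
    return conn.execute(DEDUP_SQL).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging_orders (id INT, customer TEXT, amount REAL, loaded_at TEXT)")
conn.executemany("INSERT INTO staging_orders VALUES (?, ?, ?, ?)", [
    (1, "acme", 10.0, "2024-01-01"),
    (1, "acme", 12.5, "2024-01-02"),  # the later version wins
    (2, "globex", 7.0, "2024-01-01"),
])
print(dedup_latest(conn))  # [(1, 'acme', 12.5), (2, 'globex', 7.0)]
```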

Posted 1 week ago

Apply

4.0 years

0 Lacs

Kochi, Kerala, India

On-site

Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities: As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
• Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases.
• Process data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on cloud data platforms (AWS) or HDFS.
• Experienced in developing efficient software code for multiple use cases leveraging the Spark framework, Python or Scala, and big data technologies.
• Experience in developing streaming pipelines.
• Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark, Kafka, and cloud computing services.

Preferred Education: Master's Degree.

Required Technical and Professional Expertise:
• Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
• Minimum 3 years of experience on cloud data platforms on AWS; experience with AWS EMR, AWS Glue, Databricks, AWS Redshift, and DynamoDB.
• Good to excellent SQL skills.
• Exposure to streaming solutions and message brokers like Kafka.

Preferred Technical and Professional Experience:
• Certification in AWS and Databricks, or Cloudera Spark certified developers.
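
For illustration, a minimal PySpark batch pipeline of the file-to-Hive shape this role describes; the landing path, columns, and table names are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("file-to-hive-sketch")
         .enableHiveSupport()
         .getOrCreate())

# Placeholder landing path; could equally be HDFS, or S3 via EMR/Glue.
events = spark.read.option("header", True).csv("s3a://landing/events/")

clean = (events
         .filter(F.col("event_type").isNotNull())
         .withColumn("event_date", F.to_date("event_ts")))

# Publish as a partitioned Hive table for downstream consumers.
(clean.write
      .mode("append")
      .partitionBy("event_date")
      .saveAsTable("analytics.events_clean"))
```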

Posted 1 week ago

Apply

8.0 - 13.0 years

35 - 50 Lacs

Mumbai

Work from Office

Hiring a Big Data Lead with 8+ years of experience for the US shift.

Must Have:
• Big Data: Spark, Hadoop, Kafka, Hive, Flink
• Backend: Python, Scala
• NoSQL: MongoDB, Cassandra
• Cloud: AWS/Azure/GCP, Snowflake, Databricks
• Docker, Kubernetes, CI/CD

Required Candidate Profile:
• Excellent at mentoring/training in Big Data: HDFS, YARN, Airflow, Hive, MapReduce, HBase, Kafka, plus ETL/ELT, real-time streaming, and data modeling
• Immediate joiner is a plus
• Excellent communication

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 17 Lacs

Hyderabad

Work from Office

Job Title: Data Engineer. Experience: 5+ years. Location: Hyderabad (onsite). Availability: immediate joiners preferred.

Job Description: We are seeking an experienced Data Engineer with a strong background in Java, Spark, and Scala to join our dynamic team in Hyderabad. The ideal candidate will be responsible for building scalable data pipelines, optimizing data processing workflows, and supporting data-driven solutions for enterprise-grade applications. This is a full-time onsite role.

Key Responsibilities:
• Design, develop, and maintain robust and scalable data processing pipelines.
• Work with large-scale data using distributed computing technologies like Apache Spark.
• Develop applications and data integration workflows using Java and Scala.
• Collaborate with cross-functional teams including data scientists, analysts, and product managers.
• Ensure data quality, integrity, and security in all data engineering solutions.
• Monitor and troubleshoot performance and data issues in production systems.

Must-Have Skills:
• Strong hands-on experience with Java, Apache Spark, and Scala.
• Proven experience working on large-scale data processing systems.
• Solid understanding of distributed systems and performance tuning.

Good-to-Have Skills:
• Experience with Hadoop, Hive, and HDFS.
• Familiarity with data warehousing concepts and ETL processes.
• Exposure to cloud data platforms is a plus.

Desired Candidate Profile:
• 5+ years of relevant experience in data engineering or big data technologies.
• Strong problem-solving and analytical skills.
• Excellent communication and collaboration skills.
• Ability to work independently in a fast-paced environment.

Additional Details: Work mode: onsite (Hyderabad). Employment type: full-time. Notice period: immediate joiners highly preferred, or candidates currently serving their notice period.

Posted 1 week ago

Apply

4.0 - 8.0 years

10 - 20 Lacs

Pune, Bengaluru

Work from Office

We are looking for skilled Hadoop and Google Cloud Platform (GCP) engineers to join our dynamic team. If you have hands-on experience with Big Data technologies and cloud ecosystems, we want to hear from you!

Key Skills:
• Hadoop ecosystem (HDFS, MapReduce, YARN, Hive, Spark)
• Google Cloud Platform (BigQuery, Dataproc, Cloud Composer)
• Data ingestion & ETL pipelines
• Strong programming skills (Java, Python, Scala)
• Experience with real-time data processing (Kafka, Spark Streaming)

Why Join Us?
• Work on cutting-edge Big Data projects
• Collaborate with a passionate and innovative team
• Opportunities for growth and learning

Interested candidates, please share your updated resume or connect with us directly!
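
As a hedged sketch of one of the GCP skills listed, querying BigQuery from Python; the project, dataset, and table names are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Project, dataset, and table are placeholders.
sql = """
    SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM `my-project.sales.orders`
    GROUP BY region
    ORDER BY revenue DESC
"""
for row in client.query(sql).result():
    print(row["region"], row["orders"], row["revenue"])
```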

Posted 1 week ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

MongoDB’s mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. We enable organizations of all sizes to easily build, scale, and run modern applications by helping them modernize legacy workloads, embrace innovation, and unleash AI. Our industry-leading developer data platform, MongoDB Atlas, is the only globally distributed, multi-cloud database and is available in more than 115 regions across AWS, Google Cloud, and Microsoft Azure. Atlas allows customers to build and run applications anywhere—on premises, or across cloud providers. With offices worldwide and over 175,000 new developers signing up to use MongoDB every month, it’s no wonder that leading organizations, like Samsung and Toyota, trust MongoDB to build next-generation, AI-powered applications.

We are looking for an Analytics Engineering Manager to grow and lead a small team of 1-3 Analytics Engineers, co-own our emerging analytics engineering practice, and play a critical role in further developing our rapidly growing Data Analytics organization. Partnering with people leaders and ICs within the analytics team and around the business, this role will facilitate the production of datasets used to surface key insights and drive business decision making. This role requires someone with strong analytical and technical skills who is comfortable working across multiple functional areas in a fast-paced, challenging environment. We are looking to speak to candidates who are based in Gurugram for our hybrid working model.

Responsibilities:
• Design and implement highly performant data post-processing pipelines
• Create shared data assets generating visibility into critical business metrics
• Partner with data engineering to expose governed datasets to the rest of the organization
• Make the team more efficient: research and implement new tooling/processes team-wide; seek out and automate existing costly manual processes
• Serve as an analytical and technical thought partner to stakeholders on the analytics team and around the business

Skills & Experience:
• 2+ years of experience leading a team of analytics, data, or business intelligence engineers
• 4+ years of experience leading the technical execution of analytics engineering projects of large scope and complexity
• Strong proficiency in SQL and experience working with relational databases
• Proficiency in Python for data manipulation, automation, and analysis
• Experience managing ETL and data pipeline orchestration with dbt and Airflow
• Experience with distributed data processing technologies like Hive, Trino (Presto), Spark, BigQuery
• Experience translating project requirements into a set of technical sub-tasks that build towards a final deliverable
• Strong knowledge of programming fundamentals, with an emphasis on code simplicity and performance
• Committed to continuous improvement, with a passion for building processes/tools to make everyone more efficient
• Committed to contributing to a collaborative, enjoyable, and psychologically safe work environment
• The ability to effectively collaborate cross-functionally to drive actionable and measurable results
• A passion for the role of data in helping solve complex questions
• Experience combining data from disparate data sources to identify insights that were previously unknown
• A desire to constantly learn and improve

Success Measures:
• In 3 months you will have a deep understanding of your team, your stakeholder priorities, and an outline of priorities for the next 12 months
• In 6 months you will have built out key processes and cemented strong working relationships with your key stakeholders
• In 12 months you will have delivered on two to three projects that measurably improve analytics at MongoDB

To drive the personal growth and business impact of our employees, we’re committed to developing a supportive and enriching culture for everyone. From employee affinity groups, to fertility assistance and a generous parental leave policy, we value our employees’ wellbeing and want to support them along every step of their professional and personal journeys. Learn more about what it’s like to work at MongoDB, and help us make an impact on the world!

MongoDB is committed to providing any necessary accommodations for individuals with disabilities within our application and interview process. To request an accommodation due to a disability, please inform your recruiter. MongoDB is an equal opportunities employer.

Requisition ID: 1263096705
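
As a sketch of the dbt-plus-Airflow orchestration named in the skills list: a minimal Airflow DAG that builds and then tests dbt models. The project path, schedule, and model selector are assumptions, not details from the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Project path, schedule, and model selector are placeholders.
with DAG(
    dag_id="dbt_daily_models",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/analytics/dbt && dbt run --select marts.*",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/analytics/dbt && dbt test --select marts.*",
    )
    dbt_run >> dbt_test  # only test models after they build successfully
```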

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Role: Delivery Data Solutions is a horizontal team responsible for transforming data at Delivery into meaningful data that supports analytics, metrics, ML models, and KPIs for the domain teams through real-time/batch processing. We lead optimal data resource utilization and data quality for the organization. We provide visibility and standardization of core business metrics powered through the canonical datasets owned by the team. The team is the centre of excellence for data engineering practices across the Uber Delivery org. The team creates efficient tools and processes to help people working on data, designs and maintains a holistic view of delivery data, and manages and optimises delivery data infrastructure resources.

What the Candidate Will Do:
• Collaborate with team members to develop and maintain data tools and solutions (e.g., pipelines, models, tables) to acquire, process, and store data. This role also designs and develops large-scale data systems (e.g., databases, data warehouses, big data systems), platforms, and infrastructure for various analytics and business applications.
• Build data products for business use cases, both batch and real-time.
• Develop metrics for analytical needs.
• Drive optimizations and improvements focused on optimal resource utilization, improved SLAs, and adherence to data quality standards.
• Mentor fellow engineers on design and architecture, and perform quality code reviews and design reviews for data product development.
• Contribute to strategic investments, from both inception and execution points of view.
• Consult and advise the product engineering teams on data engineering practices.

Basic Qualifications:
• Bachelor's degree in Computer Science or a related technical field, or equivalent practical experience
• Experience coding using a general-purpose programming language (e.g., Java, Python, Go, JavaScript, Fusion)
• Expert in the data tech stack, e.g., Spark, Hive

Preferred Qualifications:
• Data warehouse experience
• Expertise in the data tech stack: Spark, Hive
• Scripting and programming skills

Posted 1 week ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Our Company: Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

The Opportunity: Our engineering team develops the Adobe Experience Platform, offering innovative data management and analytics. Developing a reliable, resilient system at large scale is crucial. We use Big Data and open-source tech for Adobe's services. Our support for large enterprise products spans geographies, requiring us to manage disparate data sources and ingestion mechanisms. The data must be easily accessible at very low latency to support various scenarios and use cases. We seek candidates with deep expertise in building low-latency services at high scale who can lead us in accomplishing our vision.

What you will need to succeed:
• 8+ years in design and development of data-driven large distributed systems
• 3+ years as an architect building large-scale data-intensive distributed systems and services
• Relevant experience building application layers on top of Apache Spark
• Strong experience with Hive SQL and Presto DB
• Experience leading architecture designs to approval while collaborating with multiple collaborators, dependencies, and internal/external customer requirements
• In-depth work experience with open-source technologies like Apache Kafka, Apache Spark, Kubernetes, etc.
• Experience with big data technologies on public clouds such as Azure, AWS, or Google Cloud Platform
• Experience with in-memory distributed caches like Redis, Memcached, etc.
• Strong coding (design patterns) and design proficiencies, setting examples for others; contributions to open source are highly desirable
• Proficiency in data structures and algorithms
• Cost consciousness around computation and memory requirements
• Strong verbal and written communication skills
• BTech/MTech/MS in Computer Science

What you'll do:
• Lead the technical design and implementation strategy for major systems and components of the Adobe Experience Platform
• Evaluate and drive the architecture and technology choices for major systems/components
• Design, build, and deploy products with outstanding quality
• Innovate on the current system to improve robustness, ease, and convenience
• Articulate design and code choices to cross-functional teams
• Mentor and guide a high-performing team
• Review and provide feedback on features, technology, architecture, design, time & budget estimates, and test strategies
• Engage in creative problem-solving
• Develop and evolve engineering best practices to improve the team’s efficiency
• Partner with other teams across Adobe to achieve common goals

Discover what makes Adobe a great place to work: Life @ Adobe. Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here.

Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
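
To illustrate the in-memory caching requirement above, a minimal cache-aside sketch with redis-py; the host, key scheme, TTL, and the backing lookup are all hypothetical.

```python
import json

import redis

r = redis.Redis(host="cache.internal", port=6379)  # placeholder host

def load_profile_from_store(user_id: str) -> dict:
    # Placeholder for the slow, authoritative lookup (e.g. a warehouse query).
    return {"user_id": user_id, "segment": "demo"}

def get_profile(user_id: str) -> dict:
    """Cache-aside read: try Redis, fall back to the store, then populate."""
    key = f"profile:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    profile = load_profile_from_store(user_id)
    r.setex(key, 300, json.dumps(profile))  # a 5-minute TTL bounds staleness
    return profile

print(get_profile("u123"))
```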

Posted 1 week ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

FinBox: Where Fintech Meets Fun! Welcome to FinBox, the buzzing hive of tech innovation and creativity! Since our inception in 2017, FinBox has built some of the most advanced technologies in the financial services space that help lenders like banks, NBFCs and large enterprises build and launch credit products within a matter of days, not months or years. FinBox is a Series A funded company which is expanding globally with offices in India, Vietnam, Indonesia and the Philippines. Our vision is to build best-in-class infrastructure for lending products and help banks & financial services companies across the world scale and launch credit programs that set a new standard in the era of digital finance. So far, we’ve helped our customers disburse billions of dollars in credit across unsecured and secured products including personal loans, working capital loans, business loans, mortgages and education loans. FinBox solutions are already being used by over 100+ companies to deliver credit to over 5 million customers every month.

Why Should You Be a FinBoxer:
• Innovative Environment: At FinBox, we foster a culture of creativity and experimentation, encouraging our team to push the boundaries of what's possible in fintech.
• Impactful Work: Your contributions will directly impact the lives of millions, helping to provide fair and accessible credit to individuals and businesses alike.
• Growth Opportunities: We are a Series A funded startup with ample opportunities for growth, professional development and career advancement.
• Collaborative Culture: Join a diverse and inclusive team of experts who are passionate about making a difference and supporting one another.

Who’s a Great FinBoxer: At FinBox, we’re on the lookout for exceptional folks who are all about innovation and impact. If you’re excited to shake things up in the banking & financial services world, keep reading!
• Creative Thinkers: If your brain is always bubbling with out-of-the-box ideas and wild solutions, you’re our kind of person. We love disruptors who challenge the norm and bring fresh perspectives to the table.
• Customer Heroes: Our customers are our champions, and we need heroes who can understand their needs, deliver magical experiences, and go above and beyond to keep them happy.
• Team Players: We believe in the power of “we.” If you thrive in a collaborative environment, value different viewpoints, and enjoy being part of a spirited, supportive team, you’ll fit right in.

Role Overview: As we expand globally, we are looking for a Head of International Business to lead our foray into Southeast Asia and other emerging markets. This is a high-impact leadership role for someone who thrives in ambiguity, is comfortable with consultative and enterprise selling, and has a deep understanding of digital lending ecosystems.

Key Responsibilities:
• Market Expansion & Strategy: Drive business expansion in international geographies (SEA – Philippines, Vietnam, Indonesia, etc.), identifying and unlocking high-impact opportunities.
• Partnership Development: Build strategic relationships with banks, NBFCs, fintechs, and digital financial institutions to drive adoption of our platform.
• Sales Leadership: Lead enterprise sales cycles end-to-end – from consultative pitching to negotiation and closure with CXOs, risk heads, and digital leaders.
• Cross-functional Leadership: Work closely with Product, Risk, Legal, and Engineering teams to shape go-to-market strategies tailored to new markets.
• Product Positioning: Translate complex product capabilities into clear, value-driven propositions that resonate with international enterprise customers.
• Team Building & Mentoring: Hire, manage, and mentor regional business development and partnerships teams; foster a high-performance culture.
• Data & Insights: Leverage analytics and market insights to drive decisions, track success metrics, and iterate business strategies.
• Stakeholder Management: Manage internal and external stakeholders, often across time zones and varied cultural contexts.
• P&L Ownership: Take full responsibility for the International business unit’s profit and loss – drive sustainable revenue growth, manage costs effectively, and ensure long-term profitability.

Qualifications & Experience:
• 6+ years of experience in international business development, strategy, or partnerships, ideally in fintech, credit infrastructure, or financial SaaS.
• Proven success in building and scaling 0–1 and 1–100 businesses in new geographies.
• Experience navigating complex regulatory environments and structuring compliant partnerships in emerging markets.
• Track record of selling to and influencing C-level executives in enterprise contexts.
• Prior exposure to Southeast Asian markets is strongly preferred.
• Strong understanding of alternate-data-based credit underwriting and digital lending models.
• Excellent written and verbal communication – strong at creating client-facing documents including pitch decks and product narratives.
• Comfortable with ambiguity, autonomy, and fast-paced environments typical of scaling startups.
• MBA from a Tier-1 institution (IIMs, ISB, INSEAD, etc.) is preferred.

Posted 1 week ago

Apply

100.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Entity: Technology. Job Family Group: IT&S Group.

Job Description: Working in the digital delivery data group, you will apply your domain knowledge and familiarity with domain data processes to support the organisation. The data team provides daily operational data management, data engineering and analytics support to this organisation across a broad range of activity.

Let me tell you about the role: A data analyst collects, processes, and performs analyses on a variety of datasets. Their key responsibilities include interpreting sophisticated data sets to identify trends and patterns, using analytical tools and methods to generate actionable insights, and crafting visualizations and reports to communicate those insights and recommendations to support decision-making. Data analysts collaborate closely with business domain collaborators to understand their data analysis needs, ensure data accuracy, recommend data-driven solutions and tackle value-impacting business problems.

You might be a good fit for this role if you:
• have strong domain knowledge in control-of-work data relevant to permits to work, isolation management and relevant self-verification data.
• have experience in an asset-based industry or adjacent sectors (desirable).
• have strong analytical skills and demonstrable capability in applying analytical techniques and Python scripting to solve practical problems.
• are curious, and keen to apply new technologies, trends & methods to improve existing standards and the capabilities of the Subsurface community.
• are well organized and self-motivated; you balance proactive and reactive approaches across multiple priorities to complete tasks on time.
• apply judgment and common sense – you use insight and good judgment to inform actions and respond to situations as they arise.

What you will deliver:
• Be a link between asset teams and Technology, combining in-depth understanding of one or more relevant domains with data & analytics skills.
• Provide actionable, data-driven insights by combining deep statistical skills, data manipulation capabilities and business insight.
• Proactively identify impactful opportunities and autonomously complete data analysis, applying existing data & analytics strategies relevant to your immediate scope.
• Clean, pre-process and analyse both structured and unstructured data.
• Develop data visualisations to analyse and interrogate broad datasets (e.g. with tools such as Microsoft Power BI, Spotfire or similar).
• Present results to peers and senior management, influencing decision making.

What you will need to be successful (experience and qualifications):
Essential:
• MSc or equivalent experience in a quantitative field, preferably statistics.
• Strong domain knowledge in control-of-work data relevant to permits to work, isolation management and relevant self-verification data.
• Hands-on experience carrying out data analytics, data mining and product analytics in sophisticated, fast-paced environments.
• Applied knowledge of data analytics and data pipelining tools and approaches across all data lifecycle stages.
• Deep understanding of a few, and a high-level understanding of several, commonly available statistics approaches.
• Advanced SQL knowledge.
• Advanced scripting experience in R or Python.
• Ability to write and maintain moderately complex data pipelines.
• Customer-centric and pragmatic approach; focus on value delivery and swift execution, while maintaining attention to detail.
• Good communication and social skills, with the ability to optimally communicate ideas, expectations, and feedback to team members, collaborators, and customers; cultivate teamwork and partnership.
Desired:
• Advanced analytics degree.
• Experience in an asset-based industry or adjacent sectors.
• Experience with data technologies (e.g. Hadoop, Hive, and Spark) is a plus.

About bp: Our purpose is to deliver energy to the world, today and tomorrow. For over 100 years, bp has focused on discovering, developing, and producing oil and gas in the nations where we operate. We are one of the few companies globally that can provide governments and customers with an integrated energy offering. Delivering our strategy sustainably is fundamental to achieving our ambition to be a net zero company by 2050 or sooner!

Legal Disclaimer: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp’s recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.

Travel Requirement: Up to 10% travel should be expected with this role. Relocation Assistance: This role is eligible for relocation within country. Remote Type: This position is a hybrid of office/remote working.
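
As a flavour of the Python analysis this role involves, a hedged pandas sketch computing a simple control-of-work metric; the data, column names, and 24-hour threshold are invented for illustration and are not from the posting.

```python
import pandas as pd

# Hypothetical permit-to-work extract; real control-of-work fields will differ.
permits = pd.DataFrame({
    "site": ["A", "A", "B"],
    "issued_at": pd.to_datetime(["2024-03-01 08:00", "2024-03-02 09:00", "2024-03-01 07:00"]),
    "closed_at": pd.to_datetime(["2024-03-02 10:00", "2024-03-02 15:00", "2024-03-03 07:30"]),
})

permits["duration_h"] = (permits["closed_at"] - permits["issued_at"]).dt.total_seconds() / 3600
permits["month"] = permits["issued_at"].dt.to_period("M")

# Permits open longer than 24 hours, per site and month: the kind of
# self-verification metric this role would surface.
overdue = (permits[permits["duration_h"] > 24]
           .groupby(["site", "month"])
           .size()
           .rename("overdue_permits"))
print(overdue)
```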

Posted 1 week ago

Apply

5.0 - 8.0 years

7 - 12 Lacs

Pune

Work from Office

Role Purpose: The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations and developing technical capability within the Production Specialists.

Do:
• Oversee and support the process by reviewing daily transactions on performance parameters
• Review the performance dashboard and the scores for the team
• Support the team in improving performance parameters by providing technical support and process guidance
• Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
• Ensure standard processes and procedures are followed to resolve all client queries
• Resolve client queries as per the SLAs defined in the contract
• Develop understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
• Document and analyze call logs to spot the most frequently occurring trends and prevent future problems
• Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution
• Ensure all product information and disclosures are given to clients before and after the call/email requests
• Avoid legal challenges by monitoring compliance with service agreements

Handle technical escalations through effective diagnosis and troubleshooting of client queries:
• Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
• If unable to resolve issues, escalate them to TA & SES in a timely manner
• Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions
• Troubleshoot all client queries in a user-friendly, courteous and professional manner
• Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
• Organize ideas and effectively communicate oral messages appropriate to listeners and situations
• Follow up and make scheduled call-backs to customers to record feedback and ensure compliance with contract SLAs

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
• Mentor and guide Production Specialists on improving technical knowledge
• Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
• Develop and conduct trainings (triages) within products for Production Specialists as per target
• Inform the client about the triages being conducted
• Undertake product trainings to stay current with product features, changes and updates
• Enroll in product-specific and any other trainings per client requirements/recommendations
• Identify and document the most common problems and recommend appropriate resolutions to the team
• Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Performance Parameters and Measures:
1. Process: number of cases resolved per day, compliance with process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2. Team Management: productivity, efficiency, absenteeism
3. Capability Development: triages completed, technical test performance

Mandatory Skills: Databricks - Data Engineering.

Posted 1 week ago

Apply

9.0 years

5 - 10 Lacs

Thiruvananthapuram

On-site

9 - 12 Years 1 Opening Trivandrum Role description Role Proficiency: Leverage expertise in a technology area (e.g. Infromatica Transformation Terradata data warehouse Hadoop Analytics) Responsible for Architecture for a small/mid-size projects. Outcomes: Implement either data extract and transformation a data warehouse (ETL Data Extracts Data Load Logic Mapping Work Flows stored procedures data warehouse) data analysis solution data reporting solutions or cloud data tools in any one of the cloud providers(AWS/AZURE/GCP) Understand business workflows and related data flows. Develop design for data acquisitions and data transformation or data modelling; applying business intelligence on data or design data fetching and dashboards Design information structure work-and dataflow navigation. Define backup recovery and security specifications Enforce and maintain naming standards and data dictionary for data models Provide or guide team to perform estimates Help team to develop proof of concepts (POC) and solution relevant to customer problems. Able to trouble shoot problems while developing POCs Architect/Big Data Speciality Certification in (AWS/AZURE/GCP/General for example Coursera or similar learning platform/Any ML) Measures of Outcomes: Percentage of billable time spent in a year for developing and implementing data transformation or data storage Number of best practices documented in any new tool and technology emerging in the market Number of associates trained on the data service practice Outputs Expected: Strategy & Planning: Create or contribute short-term tactical solutions to achieve long-term objectives and an overall data management roadmap Implement methods and procedures for tracking data quality completeness redundancy and improvement Ensure that data strategies and architectures meet regulatory compliance requirements Begin engaging external stakeholders including standards organizations regulatory bodies operators and scientific research communities or attend conferences with respect to data in cloud Operational Management : Help Architects to establish governance stewardship and frameworks for managing data across the organization Provide support in implementing the appropriate tools software applications and systems to support data technology goals Collaborate with project managers and business teams for all projects involving enterprise data Analyse data-related issues with systems integration compatibility and multi-platform integration Project Control and Review : Provide advice to teams facing complex technical issues in the course of project delivery Define and measure project and program specific architectural and technology quality metrics Knowledge Management & Capability Development : Publish and maintain a repository of solutions best practices and standards and other knowledge articles for data management Conduct and facilitate knowledge sharing and learning sessions across the team Gain industry standard certifications on technology or area of expertise Support technical skill building (including hiring and training) for the team based on inputs from project manager /RTE’s Mentor new members in the team in technical areas Gain and cultivate domain expertise to provide best and optimized solution to customer (delivery) Requirement gathering and Analysis: Work with customer business owners and other teams to collect analyze and understand the requirements including NFRs/define NFRs Analyze gaps/ trade-offs based on current system context and industry practices; 
clarify the requirements by working with the customer Define the systems and sub-systems that define the programs People Management: Set goals and manage performance of team engineers Provide career guidance to technical specialists and mentor them Alliance Management: Identify alliance partners based on the understanding of service offerings and client requirements In collaboration with Architect create a compelling business case around the offerings Conduct beta testing of the offerings and relevance to program Technology Consulting: In collaboration with Architects II and III analyze the application and technology landscapers process and tolls to arrive at the architecture options best fit for the client program Analyze Cost Vs Benefits of solution options Support Architects II and III to create a technology/ architecture roadmap for the client Define Architecture strategy for the program Innovation and Thought Leadership: Participate in internal and external forums (seminars paper presentation etc) Understand clients existing business at the program level and explore new avenues to save cost and bring process efficiency Identify business opportunities to create reusable components/accelerators and reuse existing components and best practices Project Management Support: Assist the PM/Scrum Master/Program Manager to identify technical risks and come-up with mitigation strategies Stakeholder Management: Monitor the concerns of internal stakeholders like Product Managers & RTE’s and external stakeholders like client architects on Architecture aspects. Follow through on commitments to achieve timely resolution of issues Conduct initiatives to meet client expectations Work to expand professional network in the client organization at team and program levels New Service Design: Identify potential opportunities for new service offerings based on customer voice/ partner inputs Conduct beta testing / POC as applicable Develop collaterals guides for GTM Skill Examples: Use data services knowledge creating POC to meet a business requirements; contextualize the solution to the industry under guidance of Architects Use technology knowledge to create Proof of Concept (POC) / (reusable) assets under the guidance of the specialist. Apply best practices in own area of work helping with performance troubleshooting and other complex troubleshooting. Define decide and defend the technology choices made review solution under guidance Use knowledge of technology t rends to provide inputs on potential areas of opportunity for UST Use independent knowledge of Design Patterns Tools and Principles to create high level design for the given requirements. Evaluate multiple design options and choose the appropriate options for best possible trade-offs. Conduct knowledge sessions to enhance team's design capabilities. Review the low and high level design created by Specialists for efficiency (consumption of hardware memory and memory leaks etc.) Use knowledge of Software Development Process Tools & Techniques to identify and assess incremental improvements for software development process methodology and tools. Take technical responsibility for all stages in the software development process. Conduct optimal coding with clear understanding of memory leakage and related impact. 
Implement global standards and guidelines relevant to programming and development; come up with points of view and new technological ideas. Use knowledge of project management and Agile tools and techniques to support, plan and manage medium-size projects/programs as defined within UST, identifying risks and mitigation strategies. Use knowledge of project metrics to understand their relevance to the project; collect and collate project metrics and share them with the relevant stakeholders. Use knowledge of estimation and resource planning to create estimates and plan resources for specific modules or small projects with detailed requirements or user stories in place. Strong proficiency in understanding data workflows and dataflows. Attention to detail. High analytical capability.

Knowledge Examples: Data visualization. Data migration. RDBMSs (relational database management systems). SQL. Hadoop technologies like MapReduce, Hive and Pig. Programming languages, especially Python and Java. Operating systems like UNIX and MS Windows. Backup/archival software.

Additional Comments: Snowflake Architect

Key Responsibilities:
• Solution Design: Designing the overall data architecture within Snowflake, including database/schema structures, data flow patterns (ELT/ETL strategies involving Snowflake), and integration points with other systems (source systems, BI tools, data science platforms).
• Data Modeling: Designing efficient and scalable physical data models within Snowflake. Defining table structures, distribution/clustering keys, data types, and constraints to optimize storage and query performance.
• Security Architecture: Designing the overall security framework, including the RBAC strategy, data masking policies, encryption standards, and how Snowflake security integrates with broader enterprise security policies.
• Performance and Scalability Strategy: Designing solutions with performance and scalability in mind. Defining warehouse sizing strategies, query optimization patterns, and best practices for development teams. Ensuring the architecture can handle future growth in data volume and user concurrency.
• Cost Optimization Strategy: Designing architectures that are inherently cost-effective. Making strategic choices about data storage, warehouse usage patterns, and feature utilization (e.g., when to use materialized views, streams, tasks).
• Technology Evaluation and Selection: Evaluating and recommending specific Snowflake features (e.g., Snowpark, Streams, Tasks, External Functions, Snowpipe) and third-party tools (ETL/ELT, BI, governance) that best fit the requirements.
• Standards and Governance: Defining best practices, naming conventions, development guidelines, and governance policies for using Snowflake effectively and consistently across the organization.
• Roadmap and Strategy: Aligning the Snowflake data architecture with overall business intelligence and data strategy goals. Planning for future enhancements and platform evolution.
• Technical Leadership: Providing guidance and mentorship to developers, data engineers, and administrators working with Snowflake.

Key Skills:
• Deep understanding of Snowflake's advanced features and architecture.
• Strong data warehousing concepts and data modeling expertise.
• Solution architecture and system design skills.
• Experience with cloud platforms (AWS, Azure, GCP) and how Snowflake integrates with them.
• Expertise in performance tuning principles and techniques at an architectural level.
• Strong understanding of data security principles and implementation patterns.
• Knowledge of various data integration patterns (ETL, ELT, streaming).
• Excellent communication and presentation skills to articulate designs to technical and non-technical audiences.
• Strategic thinking and planning abilities.

Looking for 12+ years of experience to join our team.

Skills: Snowflake, Data modeling, Cloud platforms, Solution architecture

About UST

UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with its clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into its clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.

Posted 1 week ago

Apply

0 years

6 - 8 Lacs

Hyderābād

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Lead Consultant - Databricks Developer! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.

Responsibilities: Maintain close awareness of new and emerging technologies and their potential application to service offerings and products. Work with architects and lead engineers on solutions that meet functional and non-functional requirements. Demonstrate knowledge of relevant industry trends and standards. Demonstrate strong analytical and technical problem-solving skills. Must have experience in the Data Engineering domain.

Qualifications we seek in you!

Minimum qualifications: Bachelor's degree or equivalency (CS, CE, CIS, IS, MIS, or an engineering discipline) or equivalent work experience. Must have excellent coding skills in either Python or Scala, preferably Python. Must have experience in the Data Engineering domain. Must have implemented at least 2 projects end-to-end in Databricks. Must have hands-on experience with Databricks components including: Delta Lake, dbConnect, db API 2.0, and Databricks workflows orchestration. Must be well versed with the Databricks Lakehouse concept and its implementation in enterprise environments. Must have a good understanding of how to build complex data pipelines. Must have good knowledge of data structures and algorithms. Must be strong in SQL and Spark SQL. Must have strong performance-optimization skills to improve efficiency and reduce cost. Must have worked on both batch and streaming data pipelines. Must have extensive knowledge of the Spark and Hive data processing frameworks. Must have worked on at least one cloud (Azure, AWS, GCP) and its most common services, such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases. Must be strong in writing unit tests and integration tests. Must have strong communication skills and have worked in teams of 5 or more. Must have a great attitude towards learning new skills and upskilling existing ones.

Preferred qualifications: Unity Catalog and basic governance knowledge. Databricks SQL Endpoint understanding. CI/CD experience for building pipelines for Databricks jobs. Experience on a migration project building a unified data platform. Knowledge of DBT. Knowledge of Docker and Kubernetes.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Lead Consultant | Primary Location: India-Hyderabad | Schedule: Full-time | Education Level: Bachelor's / Graduation / Equivalent | Job Posting: Jun 9, 2025, 9:15:16 AM | Unposting Date: Ongoing | Master Skills List: Digital | Job Category: Full Time

Posted 1 week ago

Apply

4.0 years

5 - 8 Lacs

Hyderābād

On-site

Do you love understanding every detail of how new technologies work? Join the team that serves as Apple's nerve center, our Information Systems and Technology group. There are countless ways you'll contribute here, whether you're coordinating technology needs for product launches, designing music solutions for retail locations, or ensuring the strength of in-store Wi-Fi connections. From Apple Pay to the Apple website to our data centers around the globe, you'll help design and manage the massive systems that countless employees and customers rely on every day. You'll also build custom tools for employees, empowering them to solve complex problems on their own. Join our team, and together we'll explore all the ways to improve how Apple operates, freeing our employees to do what they do best: craft magical experiences for our customers. The people here at Apple don't just build products - we craft the kind of wonder that's revolutionized entire industries. It's the diversity of those people and their ideas that supports the innovation that runs through everything we do, from amazing technology to industry-leading environmental efforts. Join Apple, and help us leave the world better than we found it.

The Global Business Intelligence team provides data services, analytics, reporting, and data science solutions to Apple's business groups, including Retail, iTunes, Marketing, AppleCare, Operations, Finance, and Sales. These solutions are built on top of a phenomenal data platform and leverage multiple frameworks. This position is an extraordinary opportunity for a skilled, expert, and results-oriented Framework Software Engineer to define and build some of the best-in-class data platforms and products.

Description: As a Software Engineer, you will be responsible for building various tools and features for Data and ML platforms, including data processing, an insights portal, data observability, data lineage, a model hub, and data visualization. You will either build custom solutions from the ground up or take open-source products and customize them for Apple's needs. We're looking for an individual who loves taking on challenges, tackles problems with imaginative solutions, works well in collaborative teams, and can produce high-quality software under tight deadlines and constraints. This role involves building innovative tools and frameworks that extend the functionality of third-party BI tools using APIs.

Minimum Qualifications: 4+ years of hands-on experience with Java, Python or Scala. Experience designing and developing scalable microservices and REST APIs. Experience with SQL and NoSQL data stores. Experience building and deploying cloud-native applications/products (AWS/GCP/others). Experience using DevOps tools, containers and the Kubernetes platform. Good communication and interpersonal skills: the ability to interact and work well with members of other functional groups in a project team, and a strong sense of project ownership.

Preferred Qualifications: Knowledge of LLM serving and inference frameworks. Knowledge of LangChain/LlamaIndex, enabling RAG applications and LLM orchestration. Knowledge of big data technologies and data platforms. Knowledge of Spark or other distributed computing frameworks. Knowledge of SQL query engines like Trino, Hive, etc. Experience with JavaScript libraries and frameworks such as React is a plus.

Submit CV

Posted 1 week ago

Apply

5.0 years

7 - 12 Lacs

India

On-site

Dear candidate, we are the hiring partner to one of our esteemed clients for the position below. Kindly go through the details before applying.

Role: Big Data Administrator
Experience: 5+ years
Location: Hyderabad (Hybrid)
Position Type: Contract (up to 12 months, extendable)

Role: The candidate should have a strong technical background in Linux, networking, and security, along with hands-on experience in AWS cloud infrastructure; proficiency in Infrastructure as Code (Terraform, Ansible); and experience managing large-scale Big Data clusters (Cloudera, Hortonworks, EMR). Their expertise should include the Hadoop Distributed File System (HDFS), YARN, and various Hadoop file formats (ORC, Parquet, Avro), plus deep knowledge of the Hive, Presto, and Spark compute engines, with the ability to optimize complex SQL queries. They will also support Spark with Python (PySpark) and R (SparklyR, SparkR). Additionally, they should have solid coding experience in scripting languages (Shell, Python) and have worked with data analysts and data scientists using tools like SAS, R-Studio, JupyterHub, and H2O. Nice-to-have skills include workflow management tools (Airflow, Oozie), analytical libraries (Pandas, NumPy, SciPy, PyTorch), experience with Packer, Chef, and Jenkins, and prior knowledge of Active Directory and Windows-based VDI platforms (Citrix, AWS Workspaces).

Job Type: Contractual / Temporary
Contract length: 6 months
Pay: ₹700,000.00 - ₹1,200,000.00 per year

Application Question(s): What is your total experience? How soon can you join? Do you understand that this is a contract position, and are you fine with that? What is your current/last salary? What salary are you expecting now?

Work Location: In person

Posted 1 week ago

Apply

175.0 years

3 - 8 Lacs

Gurgaon

On-site

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

With a focus on digitization, innovation, and analytics, the Enterprise Digital teams create central, scalable platforms and customer experiences to help markets across all of these priorities. Their charter is to drive scale for the business and accelerate innovation for both immediate impact and long-term transformation of the business. A unique aspect of the Enterprise Digital teams is the integration of diverse skills across their entire remit; they carry a very broad range of responsibilities, resulting in a broad range of initiatives around the world.

The American Express Enterprise Digital Experimentation & Analytics (EDEA) team leads the Enterprise Product Analytics and Experimentation charter for Brand & Performance Marketing and Digital Acquisition & Membership experiences, as well as Enterprise Platforms. The focus of this collaborative team is to drive growth by enabling efficiencies in paid performance channels and to evolve our digital experiences with actionable insights and analytics. The team specializes in using data around digital product usage to drive improvements in the acquisition customer experience and deliver higher satisfaction and business value.

How will you make an impact in this role? This role will report to the Manager of the Paid Search Analytics team and will be based in Gurgaon. The candidate will be responsible for delivering highly impactful analytics to optimize our performance marketing channels: Deliver strategic analytics focused on performance marketing channels. Define and build key KPIs to monitor channel/product health and success. Support the development of new products and capabilities. Deliver read-outs of campaigns, uncovering insights and learnings that can be used to further optimize the channels. Gain a deep functional understanding of the enterprise-wide product capabilities and associated platforms over time, and ensure analytical insights are relevant and actionable. Power in-depth strategic analysis and provide analytical and decision support by mining digital activity data along with AXP closed-loop data.

Minimum Qualifications: Advanced degree in a quantitative field (e.g. Finance, Engineering, Mathematics, Computer Science). Strong programming skills are preferred, including some experience with Big Data programming languages (Hive, Spark), Python, and SQL. Experience in large-scale data processing and handling; an understanding of data science is a plus. Ability to work in a dynamic, cross-functional environment, with strong attention to detail. Excellent communication skills, with the ability to engage, influence, and encourage partners to drive collaboration and alignment.

Preferred Qualifications: Prior experience in performance marketing. Experience in building ML-based predictive models for marketing treatments.

We back you with benefits that support your holistic well-being so you can be and deliver your best.
This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
• Competitive base salaries
• Bonus incentives
• Support for financial well-being and retirement
• Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
• Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
• Generous paid parental leave policies (depending on your location)
• Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
• Free and confidential counseling support through our Healthy Minds program
• Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 1 week ago

Apply

2.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Overview: We are an integral part of Annalect Global and Omnicom Group, one of the largest media and advertising agency holding companies in the world. Omnicom's branded networks and numerous specialty firms provide advertising, strategic media planning and buying, digital and interactive marketing, direct and promotional marketing, public relations, and other specialty communications services. Our agency brands are consistently recognized as being among the world's creative best. Annalect India plays a key role for our group companies and global agencies by providing stellar products and services in the areas of Creative Services, Technology, Marketing Science (data & analytics), Market Research, Business Support Services, Media Services, and Consulting & Advisory Services. We currently have 2500+ awesome colleagues (in Annalect India) who are committed to solving our clients' pressing business issues. We are growing rapidly and looking for talented professionals like you to be part of this journey. Let us build this, together.

Responsibilities: Support data quality monitoring processes across different data sources (digital and offline channels). Handle report scheduling, export and manipulation across various platforms. Support tasks for automating data processes. Engage with agency internal teams to flag issues with the data they've produced. Provide weekly project updates in Hive.

Qualifications: Basic understanding of and proficiency in Python, SQL, and Power BI. Overall experience of more than 2 years; experience with Python, SQL, and Power BI is mandatory. Alteryx and media knowledge would be beneficial, as would knowledge of business intelligence and data visualization tools. Capable of independently communicating with and handling stakeholders. Experience in writing complex SQL queries is good to have. Capable of integrating data from numerous sources into a single system. Strong debugging, problem-solving, and investigative skills. Code management, versioning and day-to-day task management. Good communication and organization skills, with a logical approach to problem solving, good time management, and task prioritization skills. Good to have: experience with any visualization tool. Good to have: experience in a customer-facing role involving engagement with customers and internal stakeholders.

Posted 1 week ago

Apply

Exploring Hive Jobs in India

Hive is a popular data warehousing tool used for querying and managing large datasets in distributed storage. In India, the demand for professionals with expertise in Hive is on the rise, with many organizations looking to hire skilled individuals for various roles related to data processing and analysis.
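
To make that concrete, here is a minimal sketch of how Hive-managed data is typically queried from PySpark, the combination most of the postings above ask for. The table name (web_logs) and its columns are hypothetical, and the snippet assumes a Spark installation built with Hive support and a reachable Hive metastore.

```python
# Minimal sketch: querying a Hive table from PySpark (hypothetical table/columns).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-example")
    .enableHiveSupport()  # assumes Hive support and a configured metastore
    .getOrCreate()
)

# HiveQL runs through spark.sql; this counts events per day.
daily_counts = spark.sql("""
    SELECT event_date, COUNT(*) AS events
    FROM web_logs
    GROUP BY event_date
    ORDER BY event_date
""")
daily_counts.show()
```

The same HiveQL could be run directly in the Hive CLI or Beeline; PySpark is shown here because the roles listed above almost always pair Hive with Spark.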

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Delhi

These cities are known for their thriving tech industries and offer numerous opportunities for professionals looking to work with Hive.

Average Salary Range

The average salary range for Hive professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.

Career Path

Typically, a career in Hive progresses from roles such as Junior Developer or Data Analyst to Senior Developer, Tech Lead, and eventually Architect or Data Engineer. Continuous learning and hands-on experience with Hive are crucial for advancing in this field.

Related Skills

Apart from expertise in Hive, professionals in this field are often expected to have knowledge of SQL, Hadoop, data modeling, ETL processes, and data visualization tools like Tableau or Power BI.

Interview Questions

  • What is Hive and how does it differ from traditional databases? (basic)
  • Explain the difference between HiveQL and SQL. (medium)
  • How do you optimize Hive queries for better performance? (advanced)
  • What are the different types of tables supported in Hive? (basic)
  • Can you explain the concept of partitioning in Hive tables? (medium)
  • What is the significance of metastore in Hive? (basic)
  • How does Hive handle schema evolution? (advanced)
  • Explain the use of SerDe in Hive. (medium)
  • What are the various file formats supported by Hive? (basic)
  • How do you troubleshoot performance issues in Hive queries? (advanced)
  • Describe the process of joining tables in Hive. (medium)
  • What is dynamic partitioning in Hive and when is it used? (advanced)
  • How can you schedule jobs in Hive? (medium)
  • Discuss the differences between bucketing and partitioning in Hive (see the sketch after this list). (advanced)
  • How do you handle null values in Hive? (basic)
  • Explain the role of the Hive execution engine in query processing. (medium)
  • Can you give an example of a complex Hive query you have written? (advanced)
  • What is the purpose of the Hive metastore? (basic)
  • How does Hive support ACID transactions? (medium)
  • Discuss the advantages and disadvantages of using Hive for data processing. (advanced)
  • How do you secure data in Hive? (medium)
  • What are the limitations of Hive? (basic)
  • Explain the concept of bucketing in Hive and when it is used. (medium)
  • How do you handle schema evolution in Hive? (advanced)
  • Discuss the role of Hive in the Hadoop ecosystem. (basic)
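
Since partitioning, bucketing, and file formats come up repeatedly in the questions above, here is a hedged sketch contrasting the two table layouts. The DDL is standard HiveQL, issued here through a Hive-enabled Spark session; all table and column names are illustrative, and the same statements could be run from the Hive CLI or Beeline instead.

```python
# Sketch: partitioned vs. bucketed Hive tables (illustrative names throughout).
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Partitioning: each country value gets its own directory, so a filter on
# country lets the engine skip whole partitions (partition pruning).
spark.sql("""
    CREATE TABLE IF NOT EXISTS orders_partitioned (
        order_id BIGINT,
        amount   DOUBLE
    )
    PARTITIONED BY (country STRING)
    STORED AS PARQUET
""")

# Bucketing: rows are hashed on user_id into a fixed number of files, which
# helps joins and sampling on user_id rather than pruning by value ranges.
spark.sql("""
    CREATE TABLE IF NOT EXISTS orders_bucketed (
        order_id BIGINT,
        user_id  BIGINT,
        amount   DOUBLE
    )
    CLUSTERED BY (user_id) INTO 32 BUCKETS
    STORED AS ORC
""")
```

A useful rule of thumb when answering the comparison question: partition on low-cardinality columns you filter on, and bucket on high-cardinality columns you join on.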

Closing Remark

As you explore job opportunities in the field of Hive in India, remember to showcase your expertise and passion for data processing and analysis. Prepare well for interviews by honing your skills and staying updated with the latest trends in the industry. Best of luck in your job search!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies