2453 Hive Jobs - Page 34

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Source: LinkedIn

Job Location: Kolkata (Hybrid)
Experience Level: 5+ years
Mandatory Skills: Azure Databricks, SQL, PySpark

Primary Roles and Responsibilities:
Develop Modern Data Warehouse solutions using Databricks and the Azure stack
Provide forward-thinking solutions in the data engineering and analytics space
Collaborate with DW/BI leads to understand new ETL pipeline development requirements
Triage issues to find gaps in existing pipelines and fix them
Work with the business to understand reporting-layer needs and develop data models to fulfil them
Help junior team members resolve issues and technical challenges
Drive technical discussions with the client architect and team members
Orchestrate the data pipelines via the Airflow scheduler

Skills and Qualifications:
Bachelor's and/or master's degree in computer science, or equivalent experience
5+ years of total IT experience, with 3+ years in data warehouse/ETL projects
Deep understanding of star and snowflake dimensional modelling
Strong knowledge of data management principles
Good understanding of the Databricks Data & AI platform and the Databricks Delta Lake architecture
Hands-on experience in SQL, Python and Spark (PySpark); Azure stack experience is a must
ETL with batch and streaming (Kinesis) is desirable
Experience building ETL / data warehouse transformation processes
Experience with Apache Kafka for streaming / event-based data
Experience with other open-source big data products, including Hadoop (Hive, Pig, Impala)
Experience with open-source non-relational / NoSQL data repositories (MongoDB, Cassandra, Neo4j)
Experience working with structured and unstructured data, including imaging and geospatial data
Experience working in a DevOps environment with tools such as Terraform, CircleCI and Git
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting
Databricks Certified Data Engineer Associate/Professional certification (desirable)
Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects
Experience working in an Agile methodology
Strong verbal and written communication skills
Strong analytical and problem-solving skills with high attention to detail
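Purely for illustration (not part of the posting): a minimal sketch of the kind of Databricks pipeline step this role describes — read raw files, cleanse them, and write a partitioned Delta table. All paths, table and column names are invented, and a Databricks/Delta Lake runtime is assumed.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Ingest raw CSV files landed in the lake (path is hypothetical).
raw = spark.read.option("header", True).csv("/mnt/raw/orders")

# Basic cleansing and typing before loading the warehouse layer.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
)

# Persist as a Delta table, partitioned for downstream reporting queries.
(orders.write.format("delta")
       .mode("overwrite")
       .partitionBy("order_date")
       .saveAsTable("dw.fact_orders"))
```

In a pipeline like the one described, a step of this shape would typically run as one Airflow task, with ingestion upstream and model refresh downstream.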

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Job Summary

Responsible for building and maintaining high-performance data systems that enable deeper insights for all parts of our organization
Responsible for developing ETL/ELT pipelines for both batch and streaming data
Responsible for data flow for real-time and analytics use
Improving data pipeline performance by implementing industry best practices and different techniques for data-parallel processing
Responsible for the documentation, design, development and testing of Hadoop reporting and analytical applications
Responsible for technical discussion and finalization of requirements by communicating effectively with stakeholders
Responsible for converting functional requirements into detailed technical designs
Responsible for adhering to SCRUM timelines and delivering accordingly
Responsible for preparing Unit/SIT/UAT test cases and logging the results
Responsible for planning and tracking the implementation to closure
Ability to drive enterprise-wide initiatives for usage of external data
Envision an enterprise-wide Entitlements platform and align it with the Bank's NextGen technology vision
Continually look for process improvements
Coordinate between various technical teams and systems for smooth project execution, from technical requirements discussion, overall architecture design and technical solution discussions through build, unit testing, regression testing, system integration testing, user acceptance testing, go-live, user verification testing and rollback (if required)
Prepare a technical plan with clear milestone dates for technical tasks, as input to the PM's overall project plan
Coordinate on a need basis with technical teams across technology who are not directly involved in the project, for example firewall/network teams, DataPower teams, EDMP, OAM, OIM, ITSC, GIS teams etc.
Responsible for supporting the change management process
Responsible for working alongside PSS teams and ensuring proper KT sessions are provided to the support teams
Identify any risks within the project and record them in Riskwise after discussion with business and manager
Ensure project delivery is seamless, with zero to negligible defects

Key Responsibilities

Hands-on experience with C++, .Net, SQL, jQuery, Web APIs & services, Postgres SQL & MS SQL Server, Azure DevOps & related tooling, GitHub, ADO CI/CD pipelines
Should be transversal enough to handle Linux, PowerShell, Unix shell scripting, Kafka, Spark Streaming (see the sketch after this listing)
Hadoop: Hive, Spark, Python, PySpark
Hands-on experience with workflows/schedulers like NiFi/Ctrl-M
Experience with data loading tools like Sqoop
Experience and understanding of object-oriented programming
Motivation to learn the innovative trade of programming, debugging and deploying
Self-starter with excellent self-study skills and growth aspirations, capable of working without direction and able to deliver technical projects from scratch
Excellent written and verbal communication skills
Flexible attitude; performs under pressure
Ability to lead and influence the direction and strategy of the technology organization
Test-driven development, commitment to quality and a thorough approach to work
A good team player with the ability to meet tight deadlines in a fast-paced environment
Guide junior developers and share best practices
A cloud certification (any one of Azure/AWS/GCP) is an added advantage
Must have knowledge and understanding of Agile principles
Must have a good understanding of the project life cycle
Must have sound problem analysis and resolution abilities
Good understanding of external and internal data management and the implications of cloud usage in the context of external data

Strategy

Develop the strategic direction and roadmap for CRES TTO, aligning with the Business Strategy, ITO Strategy and investment priorities.

Business

Work hand in hand with Product Owners, Business Stakeholders, Squad Leads and CRES TTO partners, taking product programs from investment decisions into design, specification, solutioning, development, implementation and hand-over to operations, securing support and collaboration from other SCB teams
Ensure delivery to the business meets time, cost and high-quality constraints
Support the respective businesses in growing return on investment, commercialisation of capabilities, bid teams, monitoring of usage, improving client experience, enhancing operations and addressing defects, plus continuous improvement of systems
Foster an ecosystem of innovation, enabling business through technology

Governance

Promote an environment of compliance with internal control functions and the external regulatory framework

People & Talent

Ability to work with other developers and assist junior team members
Identify training needs and take action to ensure company-wide compliance
Pursue continuing education on new solutions, technology, and skills
Problem solving with other team members in the project

Risk Management

Interpreting briefs to create high-quality coding that functions according to specifications

Key Stakeholders

CRES Domain Clients
Functions MT members, Operations and COO
ITO engineering, build and run teams
Architecture and Technology Support teams
Supply Chain Management, Risk, Legal, Compliance and Audit teams
External vendors

Regulatory & Business Conduct

Display exemplary conduct and live by the Group's Values and Code of Conduct.
Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct.
Lead the team to achieve the outcomes set out in the Bank's Conduct Principles: Fair Outcomes for Clients; Effective Financial Markets; Financial Crime Compliance; The Right Environment.
Effectively and collaboratively identify, escalate, mitigate and resolve risk, conduct and compliance matters.
Serve as a Director of the Board
Exercise authorities delegated by the Board of Directors and act in accordance with the Articles of Association (or equivalent)

Other Responsibilities

Embed Here for good and the Group's brand and values in the team
Perform other responsibilities assigned under Group, Country, Business or Functional policies and procedures
Multiple functions (double hats)

Skills And Experience

Technical Project Delivery (Agile & Classic)
Vendor Management
Stakeholder Management

Qualifications

5+ years in a lead development role
Should have managed a team of at least 5 members
Should have delivered multiple projects end to end
Experience in property technology products (e.g. Lenel, CBRE, Milestone)
Strong analytical, numerical and problem-solving skills
Should be able to understand and communicate the technical details of the project
Good communication skills, oral and written
Very good exposure to technical projects, e.g. server maintenance, system administration, development or implementation experience
Effective interpersonal and relational skills, able to coach and develop the team to deliver their best
Certified Scrum Master

About Standard Chartered

We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us.

Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.

Together We:
Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
Are better together: we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term

What We Offer

In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combine to 30 days minimum.
Flexible working options based around home and office locations, with flexible working patterns.
Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and all sorts of self-help toolkits.
A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity across our teams, business functions and geographies, where everyone feels respected and can realise their full potential.
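The Kafka/Spark Streaming requirement above typically takes the shape of a Spark Structured Streaming job. A hedged sketch, assuming the spark-sql-kafka connector is on the classpath; broker, topic and field names are invented:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka_stream").getOrCreate()

# Expected shape of each JSON event (fields are hypothetical).
schema = StructType([
    StructField("txn_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read the raw event stream; broker and topic names are placeholders.
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "transactions")
          .load())

# Kafka delivers bytes; parse the JSON payload into typed columns.
parsed = events.select(
    F.from_json(F.col("value").cast("string"), schema).alias("e")
).select("e.*")

# Write parsed events to a sink (console here, just for the sketch).
query = (parsed.writeStream.format("console")
         .option("checkpointLocation", "/tmp/chk/transactions")
         .start())
query.awaitTermination()
```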

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

Greater Chennai Area

On-site

Source: LinkedIn

Area(s) of responsibility

Empowered By Innovation

Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company's consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group's 170-year heritage of building sustainable communities.

Role: Lead Data Engineer - AWS
Location: Bangalore / Chennai
Experience: 5 - 7 years

Job Profile

Provide estimates for requirements; analyse and develop as per the requirement.
Develop and maintain data pipelines and ETL (Extract, Transform, Load) processes to extract data efficiently and reliably from various sources, transform it into a usable format, and load it into the appropriate data repositories.
Create and maintain logical and physical data models that align with the organization's data architecture and business needs, including defining data schemas, tables, relationships and indexing strategies for optimal data retrieval and analysis.
Collaborate with cross-functional teams and stakeholders to ensure data security, privacy and compliance with regulations.
Collaborate with downstream applications to understand their needs, and build and optimize the data storage accordingly.
Work closely with other stakeholders and the business to understand data requirements and translate them into technical solutions.
Familiar with Agile methodologies, with prior experience working in Agile teams using Scrum/Kanban.
Lead technical discussions with customers to find the best possible solutions.
Proactively identify and implement opportunities to automate tasks and develop reusable frameworks.
Optimize data pipelines to improve performance and cost, while ensuring high data quality within the data lake.
Monitor services and jobs for cost and performance, ensure continual operation of data pipelines, and fix defects.
Constantly look for opportunities to optimize data pipelines to improve performance.

Must Have

Hands-on expertise of 4-5 years in AWS services like S3, Lambda, Glue, Athena, RDS, Step Functions, SNS, SQS, API Gateway, security, access and role permissions, and logging and monitoring services.
Good hands-on knowledge of Python, Spark, Hive, Unix and the AWS CLI.
Prior experience working with streaming solutions like Kafka.
Prior experience implementing different file storage formats like Delta Lake / Iceberg.
Excellent knowledge of data modeling and designing ETL pipelines.
Strong knowledge of different databases such as MySQL and Oracle, and of writing complex queries.
Strong experience working in a continuous integration and deployment process.
PySpark, AWS, SQL, Kafka.

Nice To Have

Hands-on experience with Terraform, Git, GitHub Actions, CI/CD pipelines, Amazon Q and AI tooling.
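As a rough illustration of the Glue/S3 skills listed above (not taken from the posting), here is a minimal AWS Glue PySpark job skeleton: read a catalogued table, filter it, and write Parquet back to S3. Database, table and bucket names are placeholders.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap: resolve arguments and initialise the job.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source table registered in the Glue Data Catalog (names illustrative).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Drop to a DataFrame for column-level transforms, then write Parquet to S3.
df = dyf.toDF().filter(F.col("status") == "COMPLETE")
df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```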

Posted 1 week ago

Apply

4.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Source: LinkedIn

We are seeking a highly skilled and motivated Senior Data Engineer with expertise in Databricks and Azure to join our team. As a Senior Data Engineer, you will be responsible for designing, developing and maintaining our data lakehouse and pipelines. You will work closely with the Data & Analytics teams to ensure efficient data flow and enable data-driven decision-making. The ideal candidate will have a strong background in data engineering, experience with Databricks, Azure Data Factory and other Azure services, and a passion for working with large-scale data sets.

Role Description

Design, develop and maintain the solutions required for data processing, storage and retrieval. Create scalable, reliable and efficient data pipelines that enable data developers and engineers, data analysts and business stakeholders to access and analyze large volumes of data. Collaborate closely with other team members and the Product Owner.

Job Requirements

Key Responsibilities

Collaborate with the Product Owner, business analyst and other team members to understand requirements and design scalable data pipelines and architectures.
Build and maintain data ingestion, transformation and storage processes using Databricks and Azure services.
Develop efficient ETL/ELT workflows to extract, transform and load data from various sources into data lakes.
Design solutions and drive implementation for enhancing, improving and securing the Data Lakehouse.
Optimize and fine-tune data pipelines for performance, reliability and scalability.
Implement data quality checks and monitoring to ensure data accuracy and integrity (see the sketch after this listing).
Work with data developers, engineers and data analysts to provide them with the necessary data infrastructure and tools for analysis and reporting.
Troubleshoot and resolve data-related issues, including performance bottlenecks and data inconsistencies.
Stay up to date with the latest trends and technologies in data engineering and recommend improvements to existing systems and processes.

Skillset

Highly self-motivated; works independently, assumes ownership and is results-oriented.
A desire and interest to stay up to date with the latest changes in Databricks, Azure and related data platform technologies.
Time-management skills and the ability to establish reasonable and attainable deadlines for resolution.
Strong programming skills in languages such as SQL, Python, Scala or Spark.
Experience working with Databricks and Azure services, such as Azure Data Lake Storage, Azure Data Factory, Azure Databricks, Azure SQL Database and Azure Synapse Analytics.
Proficiency in data modeling, database design and Spark SQL query optimization.
Familiarity with big data technologies and frameworks like Hadoop, Spark and Hive.
Familiarity with data governance and security best practices.
Knowledge of data integration patterns and tools.
Understanding of cloud computing concepts and distributed computing principles.
Excellent problem-solving and analytical skills.
Strong communication and collaboration skills to work effectively in an agile team environment.
Ability to handle multiple tasks and prioritize work in a fast-paced and dynamic environment.

Qualifications

Bachelor's degree in Computer Science, Engineering or a related field.
4+ years of proven experience as a Data Engineer, with a focus on designing and building data pipelines.
Experience working in big and complex data environments.
Certifications in Databricks or Azure services are a plus.
Experience with data streaming technologies such as Apache Kafka or Azure Event Hubs is a plus.

Company Description

Here at SoftwareOne, we give you the flexibility to unleash your creativity, without limits. We encourage autonomy and thinking outside the box, and we can't wait to hear your new ideas. And although all businesses say it, we truly believe in work-life harmony. Our people are our greatest asset, and we'll go the extra mile to ensure you're happy here. We want our people to be their true authentic selves at all times, because that's when real creativity happens.

At SoftwareOne, we believe that our people are our greatest asset. We offer:
A flexible work environment that encourages creativity and innovation.
Opportunities for professional growth and development.
An inclusive team culture where your ideas are valued and your contributions make a difference.
The chance to work on ambitious projects that push the boundaries of technology.
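The responsibilities above include implementing data quality checks. One minimal pattern, sketched in PySpark with an invented table and invented rules, is to count rule violations and fail fast rather than load bad data downstream:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("curated.customers")  # table name is illustrative

# Rule 1: the primary key must be unique.
dupes = df.groupBy("customer_id").count().filter("count > 1").count()

# Rule 2: mandatory fields must not be null.
nulls = df.filter(F.col("email").isNull() | F.col("country").isNull()).count()

# Fail the pipeline loudly so bad data never reaches consumers.
if dupes or nulls:
    raise ValueError(
        f"Data quality failed: {dupes} duplicate keys, {nulls} null rows"
    )
```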

Posted 1 week ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Title: Head of Application Development & Support
Job Requisition: R0080416
No. of Vacancies: 1
Location: Pune
Full time / Part time: Full time
Regular / Temporary: Regular

SANDVIK COROMANT is the world's leading supplier of tools, tooling solutions and know-how to the metalworking industry. With extensive investments in research and development, we create unique innovations and set new productivity standards together with our customers. These include the world's major automotive, aerospace and energy industries. Sandvik Coromant has 8,000 employees and is represented in 130 countries. We are part of the business area Sandvik Machining Solutions within the global industrial group Sandvik.

At Sandvik Coromant, we are driven by a passion for excellence in everything we do. Our belief is that sustainable success is a team effort, and with our profound knowledge of metal cutting and insight into the varying challenges of different industries, we strive to develop innovative solutions in collaboration with our customers, to meet both current and future demands. We are seeking people who are passionate about their work and possess the drive to excel to join us.

Purpose:

Head of Application Development and Support is a global role in which you will be responsible for developing, managing and enhancing digital solutions/applications curated by DIH members or your team members. You are responsible for driving end-to-end software/application delivery, ensuring the quality and speed of execution across web and mobile platforms. Leveraging and institutionalizing an agile way of working, the Head of Application Development and Support will understand business logic, guide the application development team and oversee the software/digital solutions development lifecycle. You will own and implement industry best practices and create sustainable development and support processes, eventually leading the application development team from India. Additionally, this role will focus on hiring, developing and motivating talent while being a hands-on technical leader who can engage in detailed problem-solving.

Main Responsibilities:

Collaborate with stakeholders to define and execute software development goals, ensuring alignment with the company's digital strategy
Lead the timely and high-quality execution of the digital applications portfolio by leveraging internal and external resources
Design user interfaces and implement front-end components using HTML, CSS, and JavaScript frameworks such as React or Angular
Develop server-side logic and database integration using languages such as Node.js, Python, or Java
Collaborate with designers, product managers, and other stakeholders to define project requirements and deliverables
Write clean, efficient, and maintainable code following industry best practices
Perform code reviews and provide constructive feedback to team members
Troubleshoot and debug issues reported by clients or internal stakeholders
Stay updated on emerging technologies and trends in web development
Continuously refine and implement scalable processes for software development, deployment, and support
Use structured frameworks like Scrum methodologies to ensure cross-functional engagement and delivery accountability
Work with agile development methodologies, adhering to best practices and pursuing continued learning opportunities
Identify skill gaps and address them through targeted hiring, strategic partnerships, and upskilling initiatives
Actively develop and motivate team members by providing real-time coaching, assigning developmental projects, and fostering career growth
Ensure that global digital initiatives improve the customer experience and drive the adoption of digital solutions
Collaborate effectively with cross-functional teams like Corporate IT, Cyber Security, Data and AI teams, Digital platform product owners, and Commercial and Operational stakeholders, to deliver high-impact projects
Act as a technical authority, providing guidance on architecture, design, and implementation
Help with application feasibility analysis and building use cases related to software development; test new digital applications/solutions, processes and operational changes that will improve productivity and end-user experience
Work with the team to develop intelligent dashboards, reporting, and analysis tools
Ensure application performance, uptime, and scale, maintaining high standards of code quality and thoughtful application design
Conduct usability testing and gather feedback from users to continuously improve the user experience
Stay updated on the latest trends and technologies in software development, full-stack development, database management, UI/UX design etc.

Key Competencies:

Master's or bachelor's degree in computer science, software engineering, mathematics or similar fields
10 to 15 years of experience in leading and managing large, multi-disciplinary software/applications/digital solutions teams in a global setup
Hands-on experience in application/software development
5+ years' experience in a managerial/team management role
Experience of working in a cross-functional team with a global setup
Experience in setting up an agile way of working and mentoring teams on agile/Scrum methodology
Experience in delivering multi-stack applications for different industry verticals
Software Development: understanding of various programming languages and software development methodologies
Database Management: understanding of database systems (SQL, Oracle) to manage and organize digital assets effectively
Security: understanding of cybersecurity principles to safeguard digital assets from threats and vulnerabilities
Integration: ability to support integration of different systems and solutions within the catalogue to ensure interoperability
Basic understanding of data visualization, data modelling and data analysis (preferably Power BI)
Basic understanding of data engineering (non-drag-and-drop ETL, data wrangling, data quality, warehousing, etc.)
Good understanding of software development project management tools such as DevOps, Jira, Kanban, Gantt charts, Miro
Good understanding of the different phases of web applications: concept, development, testing, deployment and maintenance
Conceptual knowledge of open-source/open-standards big data technologies, e.g. Hadoop, Spark, Hive, HBase, Cassandra, Drill, Databricks, EMR/HDInsight, etc.
Knowledge of streaming data technology and its uses (Kafka/Kinesis, Confluent Platform, Flink, Samza, Spark Streaming, Druid, Elasticsearch, etc.) would be an added advantage
Stakeholder Management: ability to communicate effectively with stakeholders, including developers, users, and management, to understand requirements and gather feedback
Training and Support: skill in providing training and support to users of the digital solutions within the catalogue

Benefits:

Sandvik offers a competitive total compensation package including comprehensive benefits. In addition, we provide opportunities for professional competence development and training, as well as opportunities for career advancement.

How to apply:

You may upload your updated profile in Workday against JR Number R0080416 through your login, no later than June 27, 2025. Alternatively, please send your application by registering on our site www.sandvik.com/career and uploading your CV against JR Number R0080416 by June 27, 2025.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Description Details

Role: Senior Developer
Required Technical Skill Set: Spark / Scala / Unix
Desired Experience Range: 5-8 years
Location of Requirement: Pune

Desired Competencies (Technical/Behavioral Competency)

Must-Have:
Minimum 4+ years of experience in development with Spark and Scala
Experience in designing and developing Big Data solutions using Hadoop ecosystem technologies such as HDFS, Spark, Hive, the Parquet file format, YARN, MapReduce and Sqoop
Good experience in writing and optimizing Spark jobs, Spark SQL etc.
Should have worked on both batch and streaming data processing
Experience in writing and optimizing complex Hive and SQL queries to process huge data; good with UDFs, tables, joins, views etc.
Experience in debugging Spark code
Working knowledge of basic UNIX commands and shell scripting
Experience with Autosys and Gradle

Good-to-Have:
Good analytical and debugging skills
Ability to coordinate with SMEs and stakeholders, manage timelines and escalations, and provide on-time status
Write clear and precise documentation/specifications
Work in an agile environment
Create documentation and document all developed mappings

Responsibilities / Expectations from the Role:
1. Create Scala/Spark jobs for data transformation and aggregation (an illustrative sketch follows this listing)
2. Produce unit tests for Spark transformations and helper methods
3. Write Scaladoc-style documentation with all code
4. Design data processing pipelines
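Responsibility 1 above (transformation and aggregation jobs) is language-agnostic in shape. Since this page's other sketches use Python, here is an equivalent expressed in PySpark rather than the Scala the role asks for; paths and column names are invented.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_agg").getOrCreate()

# Illustrative input path; in this role the source would likely be HDFS/Hive.
txns = spark.read.parquet("/data/raw/transactions")

# Aggregate per account per day -- the kind of batch job described above.
daily = (txns.groupBy("account_id", F.to_date("txn_ts").alias("txn_date"))
             .agg(F.sum("amount").alias("total_amount"),
                  F.count("*").alias("txn_count")))

daily.write.mode("overwrite").partitionBy("txn_date").parquet("/data/agg/daily")
```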

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Position: Azure Data Engineer
Location: Hyderabad
Mandatory Skills: Azure Databricks, PySpark
Experience: 5 to 9 years
Notice Period: 0 to 30 days / immediate joiner / serving notice period
Interview Date: 13-June-25
Interview Mode: Virtual drive

Must-Have Experience:

Strong design and data solutioning skills
Hands-on PySpark experience with complex transformations and handling of large datasets
Good command of, and hands-on experience in, Python, including the following concepts, packages and tools:
Object-oriented and functional programming
NumPy, Pandas, Matplotlib, requests, pytest
Jupyter, PyCharm and IDLE
Conda and virtual environments
Working experience with Hive, HBase or similar is a must

Azure Skills:

Must have working experience with Azure Data Lake, Azure Data Factory, Azure Databricks and Azure SQL databases
Azure DevOps
Azure AD integration, service principals, pass-through login etc.
Networking: VNet, private links, service connections, etc.
Integrations: Event Grid, Service Bus etc.

Database Skills:

Experience with any one of Oracle, Postgres or SQL Server
Oracle PL/SQL or T-SQL experience
Data modelling

Thank you
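The posting lists pytest among the expected Python tools. A toy example of the pattern, with a hypothetical transform and a unit test for it:

```python
import pandas as pd
import pytest


def normalise_amounts(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical transform under test: coerce and clean an amount column."""
    out = df.copy()
    out["amount"] = pd.to_numeric(out["amount"], errors="coerce")
    return out.dropna(subset=["amount"])


def test_normalise_amounts_drops_bad_rows():
    # One malformed value should be dropped; the rest should survive intact.
    df = pd.DataFrame({"amount": ["10.5", "oops", "3"]})
    result = normalise_amounts(df)
    assert len(result) == 2
    assert result["amount"].sum() == pytest.approx(13.5)
```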

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

Basavanagudi, Bengaluru, Karnataka

On-site

Source: Indeed

We are looking for an immediate joiner: an experienced Big Data Developer with a strong background in PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem. The ideal candidate should have over 4 years of experience and be ready to join immediately. This role requires hands-on expertise in big data technologies and the ability to design and implement robust data processing solutions.

Key Responsibilities:

Design, develop, and optimize large-scale data processing pipelines using PySpark.
Work with various Apache tools and frameworks (like Hadoop, Hive, HDFS, etc.) to ingest, transform, and manage large datasets.
Ensure high performance and reliability of ETL jobs in production (a join-tuning sketch follows this listing).
Collaborate with Data Scientists, Analysts, and other stakeholders to understand data needs and deliver robust data solutions.
Implement data quality checks and data lineage tracking for transparency and auditability.
Work on data ingestion, transformation, and integration from multiple structured and unstructured sources.
Leverage Apache NiFi for automated and repeatable data flow management (if applicable).
Write clean, efficient, and maintainable code in Python and Java.
Contribute to architectural decisions, performance tuning, and scalability planning.

Required Skills:

5-7 years of experience.
Strong hands-on experience with PySpark for distributed data processing.
Deep understanding of the Apache ecosystem (Hadoop, Hive, Spark, HDFS, etc.).
Solid grasp of data warehousing, ETL principles, and data modeling.
Experience working with large-scale datasets and performance optimization.
Familiarity with SQL and NoSQL databases.
Proficiency in Python and basic to intermediate knowledge of Java.
Experience using version control tools like Git and CI/CD pipelines.

Nice-to-Have Skills:

Working experience with Apache NiFi for data flow orchestration.
Experience in building real-time streaming data pipelines.
Knowledge of cloud platforms like AWS, Azure, or GCP.
Familiarity with containerization tools like Docker or orchestration tools like Kubernetes.

Soft Skills:

Strong analytical and problem-solving skills.
Excellent communication and collaboration abilities.
Self-driven, with the ability to work independently and as part of a team.

Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.

Job Type: Full-time
Pay: ₹1,000,000.00 - ₹1,700,000.00 per year
Benefits: Health insurance
Schedule: Day shift
Supplemental Pay: Performance bonus; yearly bonus
Ability to commute/relocate: Basavanagudi, Bengaluru, Karnataka: reliably commute or plan to relocate before starting work (Preferred)

Application Questions:
Are you ready to join within 15 days?
What is your current CTC?

Experience:
Python: 4 years (Preferred)
PySpark: 4 years (Required)
Data warehouse: 4 years (Required)

Work Location: In person
Application Deadline: 12/06/2025
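On the performance-tuning responsibility flagged above: a common first lever in PySpark is broadcasting a small dimension table so the large side of a join is not shuffled. A sketch with invented paths and columns:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

events = spark.read.parquet("/data/events")        # large fact data
countries = spark.read.parquet("/data/countries")  # small dimension table

# Broadcasting the small dimension avoids a full shuffle of the large side,
# often the cheapest win when tuning join-heavy ETL jobs.
enriched = events.join(F.broadcast(countries), on="country_code", how="left")

enriched.write.mode("overwrite").parquet("/data/events_enriched")
```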

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Source: LinkedIn

Visa is a world leader in payments and technology, with over 259 billion payment transactions flowing safely between consumers, merchants, financial institutions, and government entities in more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable, and secure payments network, enabling individuals, businesses, and economies to thrive, driven by a common purpose: to uplift everyone, everywhere by being the best way to pay and be paid. Make an impact with a purpose-driven industry leader. Join us today and experience Life at Visa.

Job Description

To ensure that Visa's payment technology is truly available to everyone, everywhere requires the success of our key bank and merchant partners and internal business units. The Global Data Science group supports these partners by using our extraordinarily rich data set, which spans more than 3 billion cards globally and captures more than 100 billion transactions in a single year. Our focus lies on building creative solutions that have an immediate impact on the business of our highly analytical partners. We work in complementary teams comprising members from Data Science and various groups at Visa. To support our rapidly growing group, we are looking for Data Scientists who are equally passionate about the opportunity to use Visa's rich data to tackle meaningful business problems. You will join one of the Data Science focus areas (e.g., banks, merchants & retailers, digital products, marketing) with an opportunity for rotation within Data Science to gain broad exposure to Visa's business. The role will be based in Bengaluru, India.

Essential Functions

Be an out-of-the-box thinker who is passionate about brainstorming innovative ways to use our unique data to answer business problems
Communicate with clients to understand the challenges they face and convince them with data
Extract and understand data to form an opinion on how best to help our clients and derive relevant insights
Develop visualizations to make your complex analyses accessible to a broad audience
Find opportunities to craft products out of analyses that are suitable for multiple clients
Work with stakeholders throughout the organization to identify opportunities for leveraging Visa data to drive business solutions
Mine and analyze data from company databases to drive optimization and improvement of product, marketing techniques and business strategies for Visa and its clients
Assess the effectiveness and accuracy of new data sources and data gathering techniques
Develop custom data models and algorithms to apply to data sets
Use predictive modeling to increase and optimize customer experiences, revenue generation, data insights, advertising targeting and other business outcomes
Develop processes and tools to monitor and analyze model performance and data accuracy

This is a hybrid position. Expectation of days in office will be confirmed by your Hiring Manager.

Qualifications

Basic Qualifications
Bachelor's or Master's degree in Statistics, Operations Research, Applied Mathematics, Economics, Data Science, Business Analytics, Computer Science, or a related technical field
5+ years of work experience with a bachelor's degree, or 2+ years' experience with an advanced degree (e.g., Master's or MBA)
Experience analyzing large data sets using programming languages such as Python, R, SQL and/or Spark
Experience developing and refining machine learning models for predictive analytics, classification and regression tasks

Preferred Qualifications
5+ years' experience in data-based decision-making or quantitative analysis
Knowledge of ETL pipelines in Spark, Python and Hive that process transaction- and account-level data and standardize data fields across various data sources
Experience generating and visualizing data-based insights in software such as Tableau
Competence in Excel and PowerPoint
Previous exposure to financial services, credit cards or merchant analytics is a plus

Additional Information

Visa is an EEO Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status. Visa will also consider for employment qualified applicants with criminal histories in a manner consistent with EEOC guidelines and applicable local law.
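For a flavour of the predictive-modelling work described above (illustrative only, using synthetic data rather than any Visa data):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a labelled modelling dataset.
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit a classifier and evaluate it on a holdout set.
model = GradientBoostingClassifier().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Holdout AUC: {auc:.3f}")
```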

Posted 1 week ago

Apply

0.0 - 18.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

Source: Indeed

Indore, Madhya Pradesh, India

Qualification:

BTech degree in computer science, engineering or a related field of study, or 12+ years of related work experience
7+ years of design & implementation experience with large-scale, data-centric distributed applications
Professional experience architecting and operating cloud-based solutions, with a good understanding of core disciplines like compute, networking, storage, security, databases etc.
Good understanding of data engineering concepts like storage, governance, cataloging, data quality, data modeling etc.
Good understanding of various architecture patterns like data lake, data lakehouse, data mesh etc.
Good understanding of Data Warehousing concepts, with hands-on experience working with tools like Hive, Redshift, Snowflake, Teradata etc.
Experience migrating or transforming legacy customer solutions to the cloud
Experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, DynamoDB, DocumentDB, SNS, SQS, Lambda, EKS, DataZone etc.
Thorough understanding of Big Data ecosystem technologies like Hadoop, Spark, Hive, HBase etc. and other competent tools and technologies
Understanding of designing analytical solutions leveraging AWS cognitive services like Textract, Comprehend, Rekognition etc., in combination with SageMaker, is good to have
Experience working with modern development workflows, such as git, continuous integration/continuous deployment pipelines, static code analysis tooling, infrastructure-as-code, and more
Experience with a programming or scripting language: Python/Java/Scala
AWS Professional/Specialty certification or relevant cloud expertise

Skills Required: AWS, Big Data, Spark, Technical Architecture

Role:

Drive innovation within the Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries
Capable of leading a technology team, inculcating an innovative mindset and enabling fast-paced deliveries
Able to adapt to new technologies, learn quickly, and manage high ambiguity
Ability to work with business stakeholders and attend/drive various architectural, design and status calls with multiple stakeholders
Exhibit good presentation skills, with a high degree of comfort speaking with executives, IT management, and developers
Drive technology/software sales or pre-sales consulting discussions
Ensure end-to-end ownership of all tasks assigned
Ensure high-quality software development with complete documentation and traceability
Fulfil organizational responsibilities (sharing knowledge & experience with other teams/groups)
Conduct technical trainings/sessions, write whitepapers/case studies/blogs etc.

Experience: 10 to 18 years
Job Reference Number: 12895

Posted 1 week ago

Apply

0.0 - 6.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

Source: Indeed

Indore, Madhya Pradesh, India; Bangalore, Karnataka, India; Noida, Uttar Pradesh, India

Qualification: Pre-Sales Solution Engineer - India

Experience areas or skills:

Pre-sales experience with software or analytics products
Excellent verbal & written communication skills
OLAP tools or Microsoft Analysis Services (MSAS)
Data engineering, data warehousing or ETL
Hadoop ecosystem, or AWS, Azure or GCP clusters and processing
Tableau, MicroStrategy or any BI tool
HiveQL, Spark SQL, PL/SQL or T-SQL
Writing and troubleshooting SQL programs or MDX queries
Working on Linux; programming in Python, Java or JavaScript would be a plus
Filling in RFPs or questionnaires from customers
NDA, success criteria, project closure and other documentation
Be willing to travel or relocate as per requirements

Role:

Act as the main point of contact for customer contacts involved in the evaluation process
Product demonstrations to qualified leads
Product demonstrations in support of marketing activity such as events or webinars
Own RFP, NDA, PoC success criteria, PoC closure and other documents
Secure alignment on process and documents with the customer/prospect
Own the technical-win phases of all active opportunities
Understand the customer domain and database schema
Provide OLAP and reporting solutions
Work closely with customers to understand and resolve environment, OLAP cube or reporting issues
Coordinate with the solutioning team on execution of PoCs as per the success plan
Create enhancement requests or identify requests for new features on behalf of customers or hot prospects

Experience: 3 to 6 years
Job Reference Number: 10771

Posted 1 week ago

Apply

0.0 - 20.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

Source: Indeed

Indore, Madhya Pradesh, India; Bengaluru, Karnataka, India; Pune, Maharashtra, India; Hyderabad, Telangana, India; Noida, Uttar Pradesh, India

Qualification:

15+ years of experience managing and implementing high-end software products
Expertise in Java/J2EE, EDW/SQL or Hadoop/Hive/Spark, preferably hands-on
Good knowledge of any of the clouds (AWS/Azure/GCP) - must have
Managed, delivered and implemented complex projects dealing with considerable data sizes (TB/PB) and high complexity
Experience in handling migration projects
Good to have: data ingestion, processing and orchestration knowledge

Skills Required: Java Architecture, Big Data, Cloud Technologies

Role:

Senior Technical Project Managers (STPMs) are in charge of handling all aspects of technical projects. This is a multi-dimensional and multi-functional role. You will need to be comfortable reporting program status to executives, as well as diving deep into technical discussions with internal engineering teams and external partners. You should collaborate with, and leverage, colleagues in business development, product management, analytics, marketing, engineering, and partner organizations. You will manage multiple projects and ensure all releases are on time. You are responsible for managing and delivering the technical solution that supports an organization's vision and strategic direction. You should be capable of working with different types of customers and should possess good customer-handling skills.

Experience in working in an ODC model, and capable of presenting the technical design and architecture to senior technical stakeholders
Should have experience in defining the project and delivery plan for each assignment
Capable of doing resource allocation as per the requirements of each assignment
Should have experience driving RFPs
Should have experience with account management: revenue forecasting, invoicing, SOW creation etc.

Experience: 15 to 20 years
Job Reference Number: 13010

Posted 1 week ago

Apply

0.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Source: Indeed

Chennai, Tamil Nadu, India

Skills: Big Data, PySpark, Python, Hadoop/HDFS, Spark. Good to have: GCP

Roles/Responsibilities:

Develops and maintains scalable data pipelines to support continuing increases in data volume and complexity.
Collaborates with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.
Implements processes and systems to monitor data quality, ensuring production data is always accurate and available for key stakeholders and the business processes that depend on it.
Writes unit/integration tests, contributes to the engineering wiki, and documents work.
Performs the data analysis required to troubleshoot data-related issues and assists in their resolution.
Works closely with a team of frontend and backend engineers, product managers, and analysts.
Defines company data assets (data models) and the Spark, Spark SQL, and HiveSQL jobs that populate them.
Designs data integrations and the data quality framework.

Basic Qualifications:

BS or MS degree in Computer Science or a related technical field
4+ years of SQL experience (NoSQL experience is a plus)
4+ years of experience with schema design and dimensional data modelling
4+ years of experience with Big Data technologies like Spark and Hive
2+ years of experience in data engineering on Google Cloud Platform services like BigQuery (see the sketch after this listing)

Experience: 4 to 7 years
Job Reference Number: 12907
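On the BigQuery qualification above, a minimal sketch of querying BigQuery from Python with the official client library; project, dataset and table names are placeholders, and credentials are assumed to be configured in the environment:

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

# Illustrative aggregation over a hypothetical sales table.
query = """
    SELECT order_date, SUM(amount) AS revenue
    FROM `example-project.sales.orders`
    GROUP BY order_date
    ORDER BY order_date
"""

for row in client.query(query).result():
    print(row.order_date, row.revenue)
```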

Posted 1 week ago

Apply

0.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Source: Indeed

Noida, Uttar Pradesh, India; Gurgaon, Haryana, India; Hyderabad, Telangana, India; Indore, Madhya Pradesh, India; Bangalore, Karnataka, India

Qualification:

5-7 years of good hands-on exposure to Big Data technologies: PySpark (DataFrames and Spark SQL), Hadoop, and Hive
Good hands-on experience with Python and Bash scripts
Good understanding of SQL and data warehouse concepts
Strong analytical, problem-solving, data analysis and research skills
Demonstrable ability to think outside the box, without depending on readily available tools
Excellent communication, presentation and interpersonal skills are a must

Good to have:
Hands-on experience with cloud-platform Big Data technologies (IAM, Glue, EMR, Redshift, S3, Kinesis)
Orchestration with Airflow, or experience with any job scheduler (see the sketch after this listing)
Experience in migrating workloads from on-premise to cloud, and cloud-to-cloud migrations

Skills Required: Python, PySpark, AWS

Role:

Develop efficient ETL pipelines as per business requirements, following development standards and best practices.
Perform integration testing of the created pipelines in the AWS environment.
Provide estimates for development, testing & deployment on different environments.
Participate in code peer reviews to ensure our applications comply with best practices.
Create cost-effective AWS pipelines with the required AWS services, such as S3, IAM, Glue, EMR and Redshift.

Experience: 8 to 10 years
Job Reference Number: 13025
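For the Airflow orchestration bullet above, a minimal DAG sketch (assuming Airflow 2.4+; DAG and task names are invented, and the Python callable stands in for a real Spark/Glue job submission):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_etl():
    # Placeholder for the actual pySpark/Glue job submission.
    print("running daily ETL")


with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # A single task for the sketch; real pipelines chain extract >> load.
    PythonOperator(task_id="run_etl", python_callable=run_etl)
```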

Posted 1 week ago

Apply

0.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Source: Indeed

Noida, Uttar Pradesh, India; Indore, Madhya Pradesh, India; Bangalore, Karnataka, India; Hyderabad, Telangana, India; Gurgaon, Haryana, India

Qualification:

Required:
Proven hands-on experience designing, developing and supporting database projects for analysis in a demanding environment
Proficient in database design techniques: relational and dimensional designs
Experience with, and a strong understanding of, the business analysis techniques used
High proficiency in the use of SQL or MDX queries
Ability to manage multiple maintenance, enhancement and project-related tasks
Ability to work independently on multiple assignments and to work collaboratively within a team
Strong communication skills with both internal team members and external business stakeholders

Added Advantage:
Hadoop ecosystem, or AWS, Azure or GCP clusters and processing
Experience working with Hive, Spark SQL, Redshift or Snowflake
Experience working on Linux systems
Experience with Tableau, MicroStrategy, Power BI or any BI tool
Expertise in programming in Python, Java or shell script

Role:

Roles & Responsibilities:
Be the frontend person of the world's most scalable OLAP product company, Kyvos Insights
Interact with the senior-most technical and business people of large enterprises to understand their big data strategy and their problem statements in that area
Create, present, align customers with, and implement solutions around Kyvos products for the most challenging enterprise BI/DW problems
Be the go-to person for customers regarding technical issues during the project
Be instrumental in reading the pulse of the big data market and defining the roadmap of the product
Lead a few small but highly efficient teams of big data engineers
Efficient task-status reporting to stakeholders and customers
Good verbal & written communication skills
Be willing to work off-hours to meet timelines
Be willing to travel or relocate as per project requirements

Experience: 5 to 10 years
Job Reference Number: 11078

Posted 1 week ago

Apply

0.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Source: Indeed

Noida, Uttar Pradesh, India; Bangalore, Karnataka, India; Gurugram, Haryana, India; Hyderabad, Telangana, India; Indore, Madhya Pradesh, India; Pune, Maharashtra, India

Qualification:

Do you love to work on bleeding-edge Big Data technologies, do you want to work with the best minds in the industry, and create high-performance scalable solutions? Do you want to be part of the team that is solutioning next-gen data platforms? Then this is the place for you. You want to architect and deliver solutions involving data engineering on a petabyte scale of data that solve complex business problems. Impetus is looking for a Big Data Developer who loves solving complex problems and architects and delivers scalable solutions across a full spectrum of technologies.

Experience providing technical leadership in the Big Data space (Hadoop stack: Spark, M/R, HDFS, Hive, etc.)
Should be able to communicate with the customer on both functional and technical aspects
Expert-level proficiency in Python/PySpark
Hands-on experience with shell/Bash scripting (creating and modifying script files)
Control-M, AutoSys or any job scheduler experience
Experience in visualizing and evangelizing next-generation infrastructure in the Big Data space (batch, near-real-time, and real-time technologies)
Should be able to guide the team on any functional and technical issues
Strong technical development experience: effectively writing code, code reviews, and best-practice code refactoring
Passionate about continuous learning, experimenting with, and contributing towards cutting-edge open-source technologies and software paradigms
Good communication, problem-solving & interpersonal skills
Self-starter & resourceful personality, with the ability to manage pressure situations
Capable of providing the design and architecture for typical business problems
Exposure to and awareness of the complete PDLC/SDLC
Out-of-the-box thinker, not limited to the work done in past projects

Must Have: Experience with AWS (EMR, Glue, S3, RDS, Redshift); cloud certification

Skills Required: AWS, PySpark, Spark

Role:

Evaluate and recommend the Big Data technology stack best suited for customer needs
Design/architect/implement various solutions arising out of high-concurrency systems
Responsible for timely and quality deliveries
Anticipate technological evolutions
Ensure the technical directions and choices
Develop efficient ETL pipelines through Spark or Hive
Drive significant technology initiatives end to end and across multiple layers of architecture
Provide strong technical leadership in adopting and contributing to open-source technologies related to Big Data across multiple engagements
Design/architect complex, highly available, distributed, failsafe compute systems dealing with a considerable amount (GB/TB) of data
Identify and work on incorporating non-functional requirements into the solution (performance, scalability, monitoring etc.)

Experience: 8 to 12 years
Job Reference Number: 12400

Posted 1 week ago

Apply

0.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Source: Indeed

Tesco India • Bengaluru, Karnataka, India • Hybrid • Full-Time • Permanent • Apply by 30-Jun-2025 About the role The Data Analyst in the GRP team will be responsible to analyse complex datasets and make it consumable using visual storytelling and visualization tools such as reports and dashboards built using approved tools (Tableau, Microstrategy, PyDash). The ideal candidate will have a strong analytical mindset, excellent communication skills, and a deep understanding of reporting tools front end and back end What is in it for you At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market- competitive reward package, based on the current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable. Salary - Your fixed pay is the guaranteed pay as per your contract of employment. Performance Bonus - Opportunity to earn additional compensation bonus based on performance, paid annually Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company’s policy. Making Retirement Tension-FreeSalary - In addition to Statutory retirement beneets, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF. Health is Wealth - Tesco promotes programmes that support a culture of health and wellness including insurance for colleagues and their family. Our medical insurance provides coverage for dependents including parents or in-laws. Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents. Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request. Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan. Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle. 
You will be responsible for
Driving data analysis for testing key business hypotheses and asks, developing complex visualizations, self-service tools and cockpits for answering recurring business asks and measurements
Handling quick-turnaround business requests, managing stakeholder communication and solving business asks holistically, going beyond the immediate stakeholder ask
Selecting the right tools and techniques for solving the problem at hand
Ensuring analysis, tools and dashboards are developed with the right technical rigour, meeting Tesco technical standards
Applied experience in handling large data systems and datasets
Extensive experience in handling high-volume, time-pressured business asks and ad-hoc requests
Developing production-ready visualization solutions and automated reports
Contributing to the development of knowledge assets and reusable modules on GitHub/Wiki
Coming up with new ideas and analysis to support business priorities and solve business problems
You will need
5-8 years of experience as a Data Analyst, with experience in domains like retail and CPG; experience in one of the following functional areas preferred: finance, marketing, supply chain, customer, merchandising
Proven track record of handling ad-hoc analysis and developing dashboards and visualizations based on business asks. Strong application of business understanding to analysis asks. Exposure to analysis work within the retail domain (Space, Range, Merchandising, Store Ops, Forecasting, Customer Insights, Digital, Marketing) will be preferred
Expert skills in analysing large datasets using Advanced Excel, Advanced SQL, Hive and Python
Expert skills in developing visualizations, self-service dashboards and reports using Tableau & Power BI; statistical concepts (correlation analysis and hypothesis testing); strong DW concepts (Hadoop, Teradata)
Excellent analytical and problem-solving skills; should be comfortable dealing with variability
Strong communication and interpersonal skills.
About us
Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues.
Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single-entity traditional shared services organisation in Bengaluru, India (from 2004) to a global, purpose-driven, solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business.
TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation.

Posted 1 week ago

Apply

0.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka

On-site


Bangalore, Karnataka, India;Gurgaon, Haryana, India;Indore, Madhya Pradesh, India
Qualification : Job Title: Java + Bigdata Engineer Company Name: Impetus Technologies Job Description: Impetus Technologies is seeking a skilled Java + Bigdata Engineer to join our dynamic team. The ideal candidate will possess strong expertise in Java programming and have hands-on experience with Bigdata technologies. Responsibilities: Design, develop, and maintain robust big data applications using Java and related technologies. Collaborate with cross-functional teams to gather requirements and translate them into technical specifications. Optimize application performance and scalability to handle large data sets effectively. Implement data processing solutions using frameworks such as Apache Hadoop, Apache Spark, or similar tools. Participate in code reviews, debugging, and troubleshooting of applications to ensure high-quality code standards. Stay updated with the latest trends and advancements in big data technologies and Java developments. Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. Strong proficiency in Java programming and experience with object-oriented design principles. Hands-on experience with big data technologies such as Hadoop, Spark, Kafka, or similar frameworks. Familiarity with cloud platforms and data storage solutions (AWS, Azure, etc.). Excellent problem-solving skills and a proactive approach to resolving technical challenges. Strong communication and interpersonal skills, with the ability to work collaboratively in a team-oriented environment. At Impetus Technologies, we value innovation and encourage our employees to push boundaries. If you are a passionate Java + Bigdata Engineer looking to take your career to the next level, we invite you to apply and be part of our growing team.
Skills Required : Java, Spark, PySpark, Hive, microservices
Role : Job Title: Java + Bigdata Engineer Company Name: Impetus Technologies Roles and Responsibilities: Design, develop, and maintain scalable applications using Java and Big Data technologies. Collaborate with cross-functional teams to gather requirements and understand project specifications. Implement data processing and analytics solutions leveraging frameworks such as Apache Hadoop, Apache Spark, and others. Optimize application performance and ensure data integrity throughout the data lifecycle. Conduct code reviews and implement best practices to enhance code quality and maintainability. Troubleshoot and resolve issues related to application performance and data processing. Develop and maintain technical documentation related to application architecture, design, and deployment. Stay updated with industry trends and emerging technologies in Java and Big Data ecosystems. Participate in Agile development processes including sprint planning, backlog grooming, and daily stand-ups. Mentor junior engineers and provide technical guidance to ensure successful project delivery.
Experience : 4 to 7 years
Job Reference Number : 13044

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company Description Blend at a glance: Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. For more information, visit www.blend360.com Job Description We are seeking a highly skilled Data Scientist/Analyst with expertise in Media Mix Modeling (MMM) to join our dynamic analytics team. The ideal candidate will play a critical role in providing actionable insights and optimizing marketing spend across various channels by leveraging statistical models and data-driven techniques. Key Responsibilities Develop and implement Media Mix Models to optimize marketing spend across different channels (e.g., TV, digital, radio, print, etc.). Analyze historical data to understand the impact of marketing efforts and determine the effectiveness of different media channels. Collaborate with marketing and business teams to translate business objectives into quantitative analyses and actionable insights. Build predictive models to forecast the impact of future marketing activities and recommend budget allocation. Present and communicate complex findings in a clear, concise, and actionable manner to both technical and non-technical stakeholders. Perform deep-dive analyses of marketing campaigns and customer data to identify trends, opportunities, and areas for improvement. Ensure data integrity, accuracy, and consistency in all analyses and models. Stay up-to-date with the latest trends and advancements in media mix modeling, marketing analytics, and data science. Collaborate with cross-functional teams including Data Engineering, Marketing, and Business Intelligence to ensure seamless data flow and integration. Create and maintain documentation for all models, methodologies, and analysis processes. Qualifications Bachelor’s or Master’s degree in Data Science, Statistics, Economics, Mathematics, or a related field. Proven experience (6+ years) working in Media Mix Modeling (MMM) and/or marketing analytics. Strong proficiency in statistical modeling techniques (e.g., regression analysis, time-series modeling) and data analysis. Hands-on experience with tools and technologies such as Python, R, SQL, and data visualization platforms (e.g., Tableau, Power BI). Familiarity with marketing data sources (e.g., Nielsen, IRI, social media data, CRM, etc.). Excellent problem-solving skills and a strong analytical mindset. Ability to translate complex data into actionable insights and recommendations for business stakeholders. Strong communication skills with the ability to present findings to both technical and non-technical audiences. Experience working in a fast-paced, data-driven environment. Familiarity with machine learning techniques and frameworks is a plus. Preferred Qualifications Experience working with large datasets and cloud-based data platforms (e.g., AWS, Azure, Google Cloud). Knowledge of marketing attribution models, customer segmentation, and lifetime value (LTV) analysis. Experience in running A/B tests and controlled experiments. 
Prior experience in a consulting or marketing agency environment is a plus. Additional Information What do you get in return? Competitive Salary: Your skills and contributions are highly valued here, and we make sure your salary reflects that, rewarding you fairly for the knowledge and experience you bring to the table. Dynamic Career Growth: Our vibrant environment offers you the opportunity to grow rapidly, providing the right tools, mentorship, and experiences to fast-track your career. Idea Tanks: Innovation lives here. Our "Idea Tanks" are your playground to pitch, experiment, and collaborate on ideas that can shape the future. Growth Chats: Dive into our casual "Growth Chats" where you can learn from the best, whether it's over lunch or during a laid-back session with peers; it's the perfect space to grow your skills. Snack Zone: Stay fueled and inspired! In our Snack Zone, you'll find a variety of snacks to keep your energy high and ideas flowing. Recognition & Rewards: We believe great work deserves to be recognized. Expect regular Hive-Fives, shoutouts and the chance to see your ideas come to life as part of our reward program. Fuel Your Growth Journey with Certifications: We're all about your growth groove! Level up your skills with our support as we cover the cost of your certifications.

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About Us Zupee is India's fastest-growing innovator in real-money gaming with a focus on skill-based games on the mobile platform. Started by 2 IIT-K alumni in 2018, we are backed by marquee global investors such as WestCap Group, Tomales Bay Capital, Matrix Partners, Falcon Edge, Orios Ventures & Smile Group with an aspiration to become the most trusted and responsible entertainment company in the world. To know more about our recent funding coverage: https://bit.ly/3AHmSL3 Our focus has been on innovating in the board, strategy and casual games sub-genres. We innovate to ensure our games provide an intersection between skill and entertainment, enabling our users to earn while they play. Role – Data Engineering We are looking for someone to develop the next generation of our Data platform, collaborating across functions like product, marketing, design, innovation/growth, strategy/scale, customer experience, data science & analytics and technology. Core Responsibilities ● Understand, implement and automate ETL and data pipelines with up-to-date industry standards ● Hands-on involvement in the design, development and implementation of optimal and scalable AWS services What are we looking for? ● S/he must have experience in Python ● S/he must have experience in Big Data – Spark, Hadoop, Hive, HBase and Presto ● S/he must have experience in Data Warehousing ● S/he must have experience in building reliable and scalable ETL pipelines Qualifications and Skills ● 2-4 years of professional experience in a data engineering profile ● BS or MS in Computer Science or a similar Engineering stream ● Hands-on experience with data warehousing tools ● Knowledge of distributed systems such as Hadoop, Hive, Spark and Kafka ● Experience with AWS services (EC2, RDS, S3, Athena, Data Pipeline/Glue, Lambda, DynamoDB, etc.)

Posted 1 week ago

Apply

10.0 - 15.0 years

0 Lacs

Greater Chennai Area

On-site


Customers trust the Alation Data Intelligence Platform for self-service analytics, cloud transformation, data governance, and AI-ready data, fostering data-driven innovation at scale. With more than $340M in funding, a valuation of over $1.7 billion, and nearly 600 customers, including 40% of the Fortune 100, Alation helps organizations realize value from data and AI initiatives. Alation has been recognized in 2024 as one of Inc. Magazine's Best Workplaces for the fifth time, a testament to our commitment to creating an inclusive, innovative, and collaborative environment. Collaboration is at the forefront of everything we do. We strive to bring diverse perspectives together and empower each team member to contribute their unique strengths to live out our values each day. These are: Move the Ball, Build for the Long Term, Listen Like You're Wrong, and Measure Through Customer Impact. Joining Alation means being part of a fast-paced, high-growth company where every voice matters, and where we're shaping the future of data intelligence with AI-ready data. Join us on our journey to build a world where data culture thrives and curiosity is celebrated each day! Job Description As a Manager/Sr Manager of Technical Support at Alation you will lead the day-to-day operations of a team of Technical Support Engineers. You are leading a customer-facing team as a key leader in the customer success organization. You will be responsible for directly monitoring, reporting, and driving improvements to team-level metrics and KPIs, acting as an escalation point with customers and internal teams, and optimizing and developing support processes and tools. Your work will be cross-functional and will involve working with engineering, QA, DevOps, product management, and sales. Location is Chennai (Hybrid Model). What You'll Do Manage a team of senior-level Technical Support Engineers Develop capacity forecasts and resource allocation models to ensure proper coverage Drive the scaling, onboarding, and ongoing specialization of the team Implement innovative processes to increase support efficiency and overall customer satisfaction Handle customer escalations and assist with troubleshooting and triaging incidents Manage the backlog and ensure that Support SLAs and KPIs are met Partner with Engineering & Product to prioritize issues and product improvements You Should Have 10-15 years of enterprise application support or operations experience, supporting customers in on-premises, cloud, and hybrid setups. Excellent communication skills, with a strong ability to discuss complex technical concepts with customers, engineers, and product managers Prior experience managing a team of frontline and senior-level Support Engineers Solid understanding of data platforms, data management, analytics or the BI space Self-starter with strong creative problem-solving, facilitation and interpersonal skills First-hand leadership experience working in a global organization and partnering with regional managers and leads to ensure a seamless customer experience Experience troubleshooting Linux and running shell commands Understanding of relational databases, such as Oracle and Postgres. SQL is a must.
A big plus if you have experience in the following areas: Postgres (DB internals); Elasticsearch, NoSQL, MongoDB; Hadoop ecosystem (Hive, HBase); cloud technologies and frameworks such as Kubernetes and Docker; experience scoping or building tooling to improve the support experience. Alation, Inc. is an Equal Employment Opportunity employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, veteran status, genetic information, ethnicity, citizenship, or any other characteristic protected by law. The Company will strive to provide reasonable accommodations to permit qualified applicants who have a need for an accommodation to participate in the hiring process (e.g., accommodations for a job interview) if so requested. This company participates in E-Verify. Click on any of the links below to view or print the full poster. E-Verify and Right to Work. #LI-Hybrid #LI-SR1

Posted 1 week ago

Apply

8.0 years

0 Lacs

Hyderābād

On-site


Job Req ID: 47375 Location: Hyderabad, IN Function: Technology/ IOT/Cloud About: Role Overview: We are seeking a highly skilled and motivated Senior Data Scientist with deep expertise in Generative AI , Machine Learning , Deep Learning , and advanced Data Analytics . The ideal candidate will have hands-on experience in building, deploying, and maintaining end-to-end ML solutions at scale, preferably within the Telecom domain. You will be part of our AI & Data Science team, working on high-impact projects ranging from customer analytics, network intelligence, churn prediction, to generative AI applications in telco automation and customer experience. Key Responsibilities: Design, develop, and deploy advanced machine learning and deep learning models for Telco use cases such as: Network optimization Customer churn prediction Usage pattern modeling Fraud detection GenAI applications (e.g., personalized recommendations, customer service automation) Lead the design and implementation of Generative AI solutions (LLMs, transformers, text-to-text/image models) using tools like OpenAI, Hugging Face, LangChain, etc. Collaborate with cross-functional teams including network, marketing, IT, and business to define AI-driven solutions. Perform exploratory data analysis , feature engineering, model selection, and evaluation using real-world telecom datasets (structured and unstructured). Drive end-to-end ML solution deployment into production (CI/CD pipelines, model monitoring, scalability). Optimize model performance and latency in production, especially for real-time and edge applications. Evaluate and integrate new tools, platforms, and AI frameworks to advance Vi’s data science capabilities. Provide technical mentorship to junior data scientists and data engineers. Required Qualifications & Skills: 8+ years of industry experience in Machine Learning, Deep Learning, and Advanced Analytics. Strong hands-on experience with GenAI models and frameworks (e.g., GPT, BERT, Llama, LangChain, RAG pipelines). Proficiency in Python , and libraries such as scikit-learn, TensorFlow, PyTorch, Hugging Face Transformers , etc. Experience in end-to-end model lifecycle management , from data preprocessing to production deployment (MLOps). Familiarity with cloud platforms like AWS, GCP, or Azure; and ML deployment tools (Docker, Kubernetes, MLflow, FastAPI, etc.). Strong understanding of SQL , big data tools (Spark, Hive), and data pipelines. Excellent problem-solving skills with a strong analytical mindset and business acumen. Prior experience working on Telecom datasets or use cases is a strong plus. Preferred Skills: Experience with vector databases , embeddings , and retrieval-augmented generation (RAG) pipelines. Exposure to real-time ML inference and streaming data platforms (Kafka, Flink). Knowledge of network analytics , geo-spatial modeling , or customer behavior modeling in a Telco environment. Experience mentoring teams or leading small AI/ML projects.

Posted 1 week ago

Apply

8.0 years

2 - 6 Lacs

Hyderābād

On-site


Job Req ID: 47376 Location: Hyderabad, IN Function: Technology/ IOT/Cloud About: Role Overview: We are seeking a highly skilled and motivated Senior Data Scientist with deep expertise in Generative AI , Machine Learning , Deep Learning , and advanced Data Analytics . The ideal candidate will have hands-on experience in building, deploying, and maintaining end-to-end ML solutions at scale, preferably within the Telecom domain. You will be part of our AI & Data Science team, working on high-impact projects ranging from customer analytics, network intelligence, churn prediction, to generative AI applications in telco automation and customer experience. Key Responsibilities: Design, develop, and deploy advanced machine learning and deep learning models for Telco use cases such as: Network optimization Customer churn prediction Usage pattern modeling Fraud detection GenAI applications (e.g., personalized recommendations, customer service automation) Lead the design and implementation of Generative AI solutions (LLMs, transformers, text-to-text/image models) using tools like OpenAI, Hugging Face, LangChain, etc. Collaborate with cross-functional teams including network, marketing, IT, and business to define AI-driven solutions. Perform exploratory data analysis , feature engineering, model selection, and evaluation using real-world telecom datasets (structured and unstructured). Drive end-to-end ML solution deployment into production (CI/CD pipelines, model monitoring, scalability). Optimize model performance and latency in production, especially for real-time and edge applications. Evaluate and integrate new tools, platforms, and AI frameworks to advance Vi’s data science capabilities. Provide technical mentorship to junior data scientists and data engineers. Required Qualifications & Skills: 8+ years of industry experience in Machine Learning, Deep Learning, and Advanced Analytics. Strong hands-on experience with GenAI models and frameworks (e.g., GPT, BERT, Llama, LangChain, RAG pipelines). Proficiency in Python , and libraries such as scikit-learn, TensorFlow, PyTorch, Hugging Face Transformers , etc. Experience in end-to-end model lifecycle management , from data preprocessing to production deployment (MLOps). Familiarity with cloud platforms like AWS, GCP, or Azure; and ML deployment tools (Docker, Kubernetes, MLflow, FastAPI, etc.). Strong understanding of SQL , big data tools (Spark, Hive), and data pipelines. Excellent problem-solving skills with a strong analytical mindset and business acumen. Prior experience working on Telecom datasets or use cases is a strong plus. Preferred Skills: Experience with vector databases , embeddings , and retrieval-augmented generation (RAG) pipelines. Exposure to real-time ML inference and streaming data platforms (Kafka, Flink). Knowledge of network analytics , geo-spatial modeling , or customer behavior modeling in a Telco environment. Experience mentoring teams or leading small AI/ML projects.

Posted 1 week ago

Apply

10.0 years

0 Lacs

Pune

Remote


Capgemini Invent Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what's next for their businesses.
Your Role
Use design thinking and a consultative approach to conceive cutting-edge technology solutions for business problems, mining core insights as a service model Engage with project activities across the information lifecycle Understand client requirements, and develop a data analytics strategy and solution that meets client requirements Apply knowledge and explain the benefits to organizations adopting strategies relating to NextGen/new-age data capabilities Be proficient in evaluating new technologies and identifying practical business cases to develop enhanced business value and increase operating efficiency Architect large-scale AI/ML products/systems impacting large-scale clients across industry Own end-to-end solutioning and delivery of data analytics/transformation programs Mentor and inspire a team of data scientists and engineers solving AI/ML problems through R&D while pushing the state-of-the-art solution Liaise with colleagues and business leaders across Domestic & Global Regions to deliver impactful analytics projects and drive innovation at scale Assist the sales team in reviewing RFPs, tender documents, and customer requirements Develop high-quality and impactful demonstrations, proof-of-concept pitches, solution documents, presentations, and other pre-sales assets Have in-depth business knowledge across a breadth of functional areas across sectors such as CPRD/FS/MALS/Utilities/TMT
Your Profile
B.E. / B.Tech. + MBA (Systems / Data / Data Science / Analytics / Finance) with a good academic background Minimum 10+ years of on-the-job experience in data analytics with at least 7 years of CPRD, FS, MALS, Utilities, TMT or other relevant domain experience required Specialization in the data science, data engineering or advanced analytics field is strongly recommended Excellent understanding and hands-on experience of data-science and machine-learning techniques & algorithms for supervised & unsupervised problems, NLP and computer vision Good applied statistics skills, such as distributions, statistical inference & testing, etc. Excellent understanding and hands-on experience of building deep-learning models for text & image analytics (such as ANNs, CNNs, LSTMs, transfer learning, encoder-decoder architectures, etc.) Proficient in coding in common data science languages & tools such as R, Python, Go, SAS, MATLAB, etc. At least 7 years' experience deploying digital and data science solutions on large-scale projects is required At least 7 years' experience leading / managing a data science team is required Exposure or knowledge in cloud (AWS/GCP/Azure) and big data technologies such as Hadoop, Hive
What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.
About Capgemini Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong heritage of over 55 years, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.

Posted 1 week ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Description Amazon Selection and Catalog Systems (ASCS) builds the systems that host and run the world's largest e-commerce product catalog. We power the online buying experience for customers worldwide so they can find, discover, and buy anything they want. Our massively scaled-out distributed systems process hundreds of millions of updates on the billions of products across physical, digital, and services offerings. You will be part of the Catalog Support Programs (CSP) team under Catalog Support Operations (CSO) in the ASCS org. CSP provides program management, technical support, and strategic initiatives to enhance the customer experience, owning the implementation of business logic and configurations for ASCS. We are establishing a new centralized Business Intelligence team to build self-service analytical products for ASCS that provide relevant insights and data deep dives across the business. By leveraging advanced analytics and AI/ML, we will transform catalog data into predictive insights, helping prevent customer issues before they arise. Real-time intelligence will support proactive decision-making, enabling faster, data-driven decisions across the organization and driving long-term growth and an enhanced customer experience. We are looking for an innovative, highly motivated, and experienced Business Intelligence Engineer who can think holistically about problems to understand how systems work together. You will work closely with engineering teams, product managers, program managers, and organizational leaders to deliver end-to-end data solutions aimed at continuously enhancing overall ASCS business performance and delivery quality. As a Senior BIE, you will lead the data and reporting requirements for ASCS programs and projects. Your role will involve close engagement with senior leaders to generate insights and conduct deep dives into key metrics that directly influence organizational strategic decisions and priorities. You will demonstrate high proficiency in complex SQL scripting, often combining various data sets from diverse sources. You will own the design, development, and maintenance of ongoing metrics, reports, dashboards, etc. to drive key business decisions. You will simplify and automate reporting, audits, and other data-driven activities. You will develop and drive best practices in data integrity, consistency, validations, and documentation. You will serve as a technical and analytical leader for the team, providing guidance and expertise to others on complex business and data challenges. Consistently deliver high-quality, timely results that demonstrate your deep subject matter expertise. This role requires an individual with excellent analytical abilities, deep knowledge of business intelligence solutions, as well as business acumen and the ability to work with various tech and product teams across ASCS. The ideal candidate should have excellent business and communication skills to work with business owners to define roadmaps, develop milestones, define key business questions, and build data sets that answer those questions. You should have hands-on SQL and scripting language experience, and excel in designing, implementing, and operating stable, scalable, low-cost solutions to flow data from production systems into the data warehouse and into end-user-facing applications. You will be instrumental in the creation of a reliable and scalable infrastructure for ongoing reporting and analytics.
You will structure ambiguous problems and design analytics across various disciplines, resulting in actionable recommendations spanning strategic planning, product strategy/launches, and engineering improvements. You will work closely with internal stakeholders to define key performance indicators (KPIs), implement them into dashboards and reports, and present insights in a concise and effective manner. This role will involve collaborating with business and tech leaders within ASCS and cross-functional teams to solve problems, create operational efficiencies, and deliver against high organizational standards. You should be able to apply a breadth of tools, data sources, and analytical techniques to answer a wide range of high-impact business questions and proactively uncover new insights that drive decision-making by senior leadership. As a key member of the CSP team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute on a fast-moving set of priorities, competitive pressures, and operational initiatives. Key job responsibilities Lead the design, implementation, and delivery of BI solutions for ASCS Manage and execute end-to-end projects, including stakeholder management, data gathering/manipulation, modeling, problem-solving, and communication of insights Design, build, and maintain automated reporting, dashboards, and ongoing analysis to enable data-driven decisions Report key insight trends using statistical rigor to inform the larger team of business-impacting developments Retrieve and analyze data using a broad set of Amazon's data technologies and resources Earn the trust of customers and stakeholders by understanding their needs and solving problems with technology Work closely with business stakeholders and senior leadership to review roadmaps and contribute to strategy Apply multi-domain expertise to own the end-to-end roadmap and analytical approach for complex problems Translate business requirements into analysis plans, review with stakeholders, and maintain high execution standards Proactively work with stakeholders to define use cases and standardized analytical outputs Scale data processes and reports through efficient query development and automation Demonstrate deep knowledge of available data sources to enable comparative and complex analyses Actively manage project timelines, communicate with stakeholders, and represent the team on initiatives Build and manage high-impact business review metrics, reports, and dashboards Provide BI solutions for loosely defined problems, deliver large-scale analytical projects, and highlight new opportunities Optimize code quality and BI processes to drive continuous improvement Extract, transform, and load data from multiple sources using SQL, scripting, and ETL tools A day in the life A day in the life of a BIE-III will include: Working closely with cross-functional teams including Product/Program Managers, Software Development Managers, Applied/Research/Data Scientists, and Software Developers. Leading the BIE team and owning the execution of BIE projects.
Building dashboards, performing root cause analysis, and sharing actionable insights with stakeholders to enable data-informed decision making Leading reporting and analytics initiatives to drive data-informed decision making Designing, developing, and maintaining ETL processes and data visualization dashboards using Amazon QuickSight Transforming complex business requirements into actionable analytics solutions Solving ambiguous analyses with less well-defined inputs and outputs, driving to the heart of the problem and identifying root causes Handling large data sets in analysis through the use of additional tools Deriving recommendations from analysis that significantly impact a department, create new processes, or change existing processes Understanding the basics of test and control comparison, and providing insights through basic statistical measures such as hypothesis testing Identifying and implementing optimal communication mechanisms based on the data set and the stakeholders involved Communicating complex analytical insights and business implications effectively About The Team This central BIE team within ASCS will be responsible for building a structured analytical data layer, bringing in BI discipline by defining metrics in a standardized way and establishing a single definition of metrics across the catalog ecosystem. They will also identify clear sources of truth for critical data. The team will build and maintain the data pipelines for critical projects tailored to the needs of ASCS teams, leveraging catalog data to provide a unified view of product information. This will support real-time decision-making and empower teams to make data-driven decisions quickly, driving innovation. This team will leverage advanced analytics that can shift us to a proactive, data-driven approach, enabling informed decisions that drive growth and enhance the customer experience. This team will adopt best practices, standardize metrics, and continuously iterate on queries and data sets as they evolve. Automated quality controls and real-time monitoring will ensure consistent data quality across the organization. Basic Qualifications 10+ years of professional or military experience 6+ years of SQL experience Experience programming to extract, transform and clean large (multi-TB) data sets Experience with theory and practice of design of experiments and statistical analysis of results Experience with AWS technologies Experience in scripting for automation (e.g. Python) and advanced SQL skills. Experience with theory and practice of information retrieval, data science, machine learning and data mining Experience in the data/BI space Knowledge of data warehousing and data modeling Demonstrate proficiency in SQL, data analysis, and data visualization tools like Amazon QuickSight/Tableau to drive data-driven decision making Experience with statistical analytics and programming languages (e.g., Python, Java, Ruby, R) and big data technologies/languages (e.g. Spark, Hive, Hadoop, PyTorch, PySpark) to build and maintain data pipelines and ETL processes Experience applying basic statistical methods (e.g. regression, t-test, Chi-squared) as well as exploratory, deterministic, and probabilistic analysis techniques to solve complex business problems. Track record of generating key business insights and collaborating with stakeholders. 
Experience working directly with business stakeholders to translate between data and business needs Superior verbal and written communication and presentation skills, experience working across functional teams and senior stakeholders. Track record of building automated, scalable analytical solutions Bachelor's or Master's in Computer Science, Mathematics, Statistics, Operations Research, Data Science, Economics, Business Administration, or a similar related discipline Preferred Qualifications Experience managing, analyzing and communicating results to senior leadership Master's degree in statistics, data science, or an equivalent quantitative field Experience building measures and metrics, and developing reporting solutions Experience using Cloud Storage and Computing technologies such as AWS Redshift, S3, Hadoop, etc. Experience building and maintaining data pipelines and ETL processes Experience with statistical analysis, correlation analysis, as well as exploratory, deterministic, and probabilistic analysis techniques Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner. Company - ADCI - Karnataka Job ID: A2945833

Posted 1 week ago

Apply

Exploring Hive Jobs in India

Hive is a popular data warehousing tool used for querying and managing large datasets in distributed storage. In India, the demand for professionals with expertise in Hive is on the rise, with many organizations looking to hire skilled individuals for various roles related to data processing and analysis.
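
For readers new to Hive, here is a minimal, illustrative sketch of what everyday HiveQL looks like; the table name, columns, and storage location below are hypothetical:

```sql
-- Define an external table over delimited files in distributed storage (e.g. HDFS/S3)
CREATE EXTERNAL TABLE IF NOT EXISTS sales (
  order_id   BIGINT,
  product_id STRING,
  amount     DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/data/sales';

-- Familiar SQL-style analytics; Hive compiles this into distributed jobs
SELECT product_id, SUM(amount) AS total_sales
FROM sales
GROUP BY product_id
ORDER BY total_sales DESC
LIMIT 10;
```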

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Delhi

These cities are known for their thriving tech industries and offer numerous opportunities for professionals looking to work with Hive.

Average Salary Range

The average salary range for Hive professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.

Career Path

Typically, a career in Hive progresses from roles such as Junior Developer or Data Analyst to Senior Developer, Tech Lead, and eventually Architect or Data Engineer. Continuous learning and hands-on experience with Hive are crucial for advancing in this field.

Related Skills

Apart from expertise in Hive, professionals in this field are often expected to have knowledge of SQL, Hadoop, data modeling, ETL processes, and data visualization tools like Tableau or Power BI.

Interview Questions

  • What is Hive and how does it differ from traditional databases? (basic)
  • Explain the difference between HiveQL and SQL. (medium)
  • How do you optimize Hive queries for better performance? (advanced)
  • What are the different types of tables supported in Hive? (basic)
  • Can you explain the concept of partitioning in Hive tables? (medium)
  • What is the significance of metastore in Hive? (basic)
  • How does Hive handle schema evolution? (advanced)
  • Explain the use of SerDe in Hive. (medium)
  • What are the various file formats supported by Hive? (basic)
  • How do you troubleshoot performance issues in Hive queries? (advanced)
  • Describe the process of joining tables in Hive. (medium)
  • What is dynamic partitioning in Hive and when is it used? (advanced)
  • How can you schedule jobs in Hive? (medium)
  • Discuss the differences between bucketing and partitioning in Hive (see the sketch after this list). (advanced)
  • How do you handle null values in Hive? (basic)
  • Explain the role of the Hive execution engine in query processing. (medium)
  • Can you give an example of a complex Hive query you have written? (advanced)
  • How does Hive support ACID transactions? (medium)
  • Discuss the advantages and disadvantages of using Hive for data processing. (advanced)
  • How do you secure data in Hive? (medium)
  • What are the limitations of Hive? (basic)
  • Explain the concept of bucketing in Hive and when it is used. (medium)
  • Discuss the role of Hive in the Hadoop ecosystem. (basic)
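
Several of the questions above (partitioning, dynamic partitioning, bucketing) are easiest to answer with concrete DDL in front of you. Here is a short illustrative sketch; the table, columns, and bucket count are made up for the example:

```sql
-- Partitioning: each (country, dt) pair becomes its own directory, so queries
-- filtering on these columns can skip irrelevant data entirely (partition pruning).
-- Bucketing: rows are hashed on user_id into a fixed number of files per
-- partition, which mainly helps joins and sampling on that column.
CREATE TABLE page_views (
  user_id STRING,
  url     STRING
)
PARTITIONED BY (country STRING, dt STRING)
CLUSTERED BY (user_id) INTO 32 BUCKETS
STORED AS ORC;

-- Dynamic partitioning: Hive derives partition values from the trailing
-- columns of the SELECT instead of requiring them per INSERT statement.
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

INSERT OVERWRITE TABLE page_views PARTITION (country, dt)
SELECT user_id, url, country, dt
FROM raw_page_views;
```

In short: partitioning controls how data is laid out in directories and enables pruning on filter columns, while bucketing controls how data is split into files within each partition and benefits joins, sampling, and aggregations on the bucketed column.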

Closing Remark

As you explore job opportunities in the field of Hive in India, remember to showcase your expertise and passion for data processing and analysis. Prepare well for interviews by honing your skills and staying updated with the latest trends in the industry. Best of luck in your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies