Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Overview

Working at Atlassian
Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company.

Responsibilities
On your first day, we'll expect you to have:
- BS in Computer Science or equivalent experience, with 3+ years as a Data Engineer or in a similar role
- Programming skills in Python; Java is good to have
- Ability to design data models for storage and retrieval that meet product requirements
- Experience building scalable data pipelines using Spark, Airflow, AWS data services (Redshift, Athena, EMR), and Apache projects (Spark, Flink, Hive, and Kafka)
- Familiarity with modern software development practices (Agile, TDD, CI/CD) applied to data engineering
- Experience enhancing data quality through internal tools/frameworks that detect DQ issues
- Working knowledge of relational databases and SQL query authoring

We'd be super excited if you have followed a Kappa architecture in any of your previous deployments and have domain knowledge of Finance/Financial Systems.

Our Perks & Benefits
Atlassian offers a variety of perks and benefits to support you and your family and to help you engage with your local community. Our offerings include health coverage, paid volunteer days, wellness resources, and so much more. Visit go.atlassian.com/perksandbenefits to learn more.

About Atlassian
At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet, and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together. We believe that the unique contributions of all Atlassians create our success.
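As a rough illustration of the data-quality detection work this listing describes, here is a minimal sketch with hypothetical rule names (not Atlassian's internal framework):

```python
# Minimal data-quality check sketch (hypothetical rules, for illustration only).
# Each rule inspects a list of row dicts and reports the rows that violate it.

def check_not_null(rows, column):
    """Return indices of rows where `column` is missing or None."""
    return [i for i, row in enumerate(rows) if row.get(column) is None]

def check_unique(rows, column):
    """Return indices of rows whose `column` value repeats an earlier one."""
    seen, dupes = set(), []
    for i, row in enumerate(rows):
        value = row.get(column)
        if value in seen:
            dupes.append(i)
        seen.add(value)
    return dupes

def run_checks(rows, rules):
    """Apply (name, check, column) rules and collect violations per rule."""
    return {name: check(rows, column) for name, check, column in rules}

rows = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},
    {"id": 2, "amount": 5.0},
]
report = run_checks(rows, [
    ("amount_not_null", check_not_null, "amount"),
    ("id_unique", check_unique, "id"),
])
print(report)  # {'amount_not_null': [1], 'id_unique': [2]}
```

In a real pipeline, checks like these would run against Spark DataFrames or warehouse tables rather than in-memory rows.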
To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines. To provide you the best experience, we can support you with accommodations or adjustments at any stage of the recruitment process. Simply inform our Recruitment team during your conversation with them. To learn more about our culture and hiring process, visit go.atlassian.com/crh.
Posted 1 week ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
LinkedIn is the world's largest professional network, built to create economic opportunity for every member of the global workforce. Our products help people make powerful connections, discover exciting opportunities, build necessary skills, and gain valuable insights every day. We're also committed to providing transformational opportunities for our own employees by investing in their growth. We aspire to create a culture that's built on trust, care, inclusion, and fun, where everyone can succeed. Join us to transform the way the world works.

This role will be based in Bangalore, India. At LinkedIn, we trust each other to do our best work where it works best for us and our teams. This role offers a hybrid work option, meaning you can both work from home and commute to a LinkedIn office, depending on what's best for you and when it is important for your team to be together.

As part of our world-class software engineering team, you will be charged with building the next-generation infrastructure and platforms for LinkedIn, including but not limited to: an application and service delivery platform, massively scalable data storage and replication systems, a cutting-edge search platform, a best-in-class AI platform, an experimentation platform, a privacy and compliance platform, etc. You will work and learn among the best, putting to use your passion for distributed technologies and algorithms, API and systems design, and writing code that performs at extreme scale. LinkedIn has pioneered well-known open-source infrastructure projects like Apache Kafka, Pinot, Azkaban, Samza, Venice, DataHub, Feathr, etc. We also work with industry-standard open-source infrastructure products like Kubernetes, gRPC, and GraphQL. Come join our infrastructure teams and share the knowledge with a broader community while making a real impact within our company.
Responsibilities:
- You will own the technical strategy for broad or complex requirements with insightful and forward-looking approaches that go beyond the direct team and solve large open-ended problems.
- You will design, implement, and optimize the performance of large-scale distributed systems with security and compliance in mind.
- You will improve the observability and understandability of various systems, with a focus on improving developer productivity and system sustenance.
- You will effectively communicate with the team, partners, and stakeholders.
- You will mentor other engineers, help define our challenging technical culture, and help build a fast-growing team.
- You will work closely with the open-source community to participate in and influence cutting-edge open-source projects (e.g., Apache Iceberg).
- You will deliver incremental impact by driving innovation while iteratively building and shipping software at scale.
- You will diagnose technical problems, debug in production environments, and automate routine tasks.

Basic Qualifications:
- BA/BS degree in Computer Science or a related technical discipline, or related practical experience.
- 8+ years of industry experience in software design, development, and algorithm-related solutions.
- 8+ years of experience programming in object-oriented languages such as Java, Python, or Go, and/or functional languages such as Scala or other relevant languages.
- Hands-on experience developing distributed systems, large-scale systems, databases, and/or backend APIs.

Preferred Qualifications:
- Experience with the Hadoop (or similar) ecosystem (Gobblin, Kafka, Iceberg, ORC, MapReduce, YARN, HDFS, Hive, Spark, Presto)
- Experience with industry or open-source projects and/or academic research in data management, relational databases, and/or large-data, parallel, and distributed systems
- Experience in architecting, building, and running large-scale systems
- Experience with open-source project management and governance

Suggested Skills:
- Distributed systems
- Backend systems infrastructure
- Java

You Will Benefit from Our Culture: We strongly believe in the well-being of our employees and their families. That is why we offer generous health and wellness programs and time away for employees of all levels.

India Disability Policy: LinkedIn is an equal employment opportunity employer offering opportunities to all job seekers, including individuals with disabilities. For more information on our equal opportunity policy, please visit https://legal.linkedin.com/content/dam/legal/Policy_India_EqualOppPWD_9-12-2023.pdf

Global Data Privacy Notice for Job Candidates: This document provides transparency around the way in which LinkedIn handles personal data of employees and job applicants: https://legal.linkedin.com/candidate-portal
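The Kafka work named in this listing rests on key-to-partition routing, which is what gives Kafka its per-key ordering guarantee. A simplified sketch (real Kafka clients hash keys with murmur2; this uses CRC32 purely for illustration):

```python
# Sketch of Kafka-style key-to-partition routing (illustrative only:
# real Kafka producers hash keys with murmur2, not CRC32).
import zlib

def partition_for(key: str, num_partitions: int) -> int:
    """Route a record key to a partition deterministically."""
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# Records with the same key always land on the same partition,
# which preserves per-key ordering across the topic.
p1 = partition_for("user-42", 6)
p2 = partition_for("user-42", 6)
assert p1 == p2
print(p1)
```

The design point: deterministic hashing spreads load across partitions while keeping each key's events strictly ordered within one partition.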
Posted 1 week ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Background
Praan (Praan, Inc.) is an impact-focused deep-tech startup democratizing clean air using breakthrough filterless technology. The company is backed by top-tier VCs and CXOs globally and currently operates between the United States and India. Our team pays extreme attention to detail and loves building technology that's aspirational. Praan's team and culture are positioned to empower people to solve large global problems at an accelerated pace.

Why
Everyone worries about the climate-change doomsday expected to occur in the 2050s. However, there is one doomsday that is already the reality for millions of people around the world today. Air pollution takes more than 7 million lives globally every single year, and over 5% of premature child deaths in developing countries occur due to air pollution. Everyone has relied on governments or experts to solve the problem, but most solutions up until today have been either too expensive or too ineffective. Praan is an attempt at making the future cleaner, healthier, and safer for the generations to come.

Job Description
- Supervise, monitor, and coordinate all production activities across the HIVE and MKII assembly lines
- Ensure adherence to daily, weekly, and monthly production targets while maintaining product quality and minimizing downtime
- Implement and sustain Kaizen, 5S, and other continuous improvement initiatives to enhance line efficiency and reduce waste
- Oversee daily start-of-day and end-of-day inventory reporting
- Ensure line balancing for optimal resource utilization and minimal bottlenecks
- Monitor and manage manpower deployment, shift scheduling, absentee management, and skill mapping to maintain productivity
- Drive quality standards by coordinating closely with the Manufacturing Lead
- Track and analyze key production KPIs (OEE, yield, downtime) and initiate corrective actions
- Ensure adherence to SOPs, safety protocols, and compliance standards
- Support new product introductions (NPIs) or design changes in coordination with R&D/engineering teams
- Train and mentor line operators and line leaders, ensuring training, skill development, and adherence to performance standards
- Monitor and report on key production metrics, including output, downtime, efficiency, scrap rates, and productivity, ensuring targets are met consistently
- Maintain documentation and reports related to production planning, line output, incidents, and improvements

Skill Requirements
- Diploma/Bachelor's degree in Mechanical, Production, Electronics, Industrial Engineering, or a related field
- 4-8 years of hands-on production supervision experience in a high-volume manufacturing environment managing the production of multiple products
- Proven expertise in Kaizen, Lean Manufacturing, line balancing, and shop floor management
- Proven ability to manage large teams, allocate resources effectively, and meet production targets in a fast-paced, dynamic environment
- Experience with production planning, manpower management, and problem-solving techniques (such as 5 Whys, fishbone diagrams, etc.)
- Strong understanding of manufacturing KPIs and process documentation
- Excellent leadership, communication, and conflict-resolution skills
- Hands-on attitude with a willingness to work on the ground
- Experience in automotive, consumer electronics, or similar high-volume industries

Praan is an equal opportunity employer and does not discriminate based on race, religion, caste, gender, disability, or any other criteria. We just care about working with great human beings!
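The OEE metric this role tracks is conventionally computed as Availability x Performance x Quality. A worked sketch with illustrative numbers (not Praan figures):

```python
# OEE (Overall Equipment Effectiveness) = Availability x Performance x Quality.
# Standard textbook formula; the input numbers below are illustrative only.

def oee(planned_min, downtime_min, ideal_cycle_s, total_units, good_units):
    run_time = planned_min - downtime_min          # minutes actually running
    availability = run_time / planned_min          # share of planned time used
    performance = (ideal_cycle_s * total_units) / (run_time * 60)  # speed vs ideal
    quality = good_units / total_units             # first-pass yield
    return availability * performance * quality

# One shift: 480 planned minutes, 60 min downtime, 30 s ideal cycle time,
# 700 units produced, 680 of them good.
score = oee(480, 60, 30, 700, 680)
print(round(score, 3))  # → 0.708
```

Each factor isolates one loss category (downtime, slow cycles, scrap), which is why corrective actions are usually targeted at whichever factor drags the product down.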
Posted 1 week ago
5.0 - 12.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
We have an opportunity with one of our clients; below is the detailed job description:

Experience Range: 5 to 12 years
Job Location: Coimbatore & Chennai
Event Date: 14-Jun-25 | Face to Face | Coimbatore
Interested candidates must be available for the event on 14-June-25.

Job Description:
1. 5-12 years of experience in Big Data & data-related technologies
2. Expert-level understanding of distributed computing principles
3. Expert-level knowledge of and experience in Apache Spark
4. Hands-on programming with Python
5. Proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop
6. Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
7. Experience with messaging systems such as Kafka or RabbitMQ
8. Good understanding of Big Data querying tools such as Hive and Impala
9. Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP, and files
10. Good understanding of SQL queries, joins, stored procedures, and relational schemas
11. Experience with NoSQL databases such as HBase, Cassandra, and MongoDB
12. Knowledge of ETL techniques and frameworks
13. Performance tuning of Spark jobs
14. Experience with native cloud data services: AWS, Azure Databricks, or GCP
15. Ability to lead a team efficiently
16. Experience with designing and implementing Big Data solutions
17. Practitioner of Agile methodology

We Offer:
1. Opportunity to work on technical challenges that may impact across geographies
2. Vast opportunities for self-development: online university, knowledge sharing opportunities globally, learning opportunities through external certifications
3. Opportunity to share your ideas on international platforms
4. Sponsored Tech Talks & Hackathons
5. Possibility to relocate to any EPAM office for short- and long-term projects
6. Focused individual development
7. Benefit package: health and medical benefits, retirement benefits, paid time off, flexible benefits
8. Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
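For the SQL joins and aggregation skills in item 10, a self-contained example using Python's built-in sqlite3 with a hypothetical orders/customers schema:

```python
# Minimal SQL join + aggregation example using Python's built-in sqlite3
# (hypothetical customers/orders schema, for illustration only).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 100.0), (12, 2, 75.0);
""")

# Join orders to customers, then aggregate order count and spend per customer.
rows = conn.execute("""
    SELECT c.name, COUNT(o.id), SUM(o.total)
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
print(rows)  # [('Asha', 2, 350.0), ('Ravi', 1, 75.0)]
```

The same join/group-by pattern carries over directly to Hive or Spark SQL, just at a much larger scale.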
Posted 1 week ago
5.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Preferred Education: Master's Degree

Required Technical and Professional Expertise
- 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git
- Experience developing Python and PySpark programs for data analysis
- Good working experience using Python to develop custom frameworks for generating rules (much like a rules engine)
- Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark
- Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations

Preferred Technical and Professional Experience
- Understanding of DevOps
- Experience in building scalable end-to-end data ingestion and processing solutions
- Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala
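The rules-engine-style framework mentioned in the expertise list can be sketched minimally as follows (hypothetical rule names; not IBM's actual framework):

```python
# Tiny rules-engine sketch (illustrative; hypothetical rule names).
# A rule pairs a predicate with a label; the engine returns the labels
# of every rule a given record triggers.

def make_threshold_rule(field, limit, label):
    """Build a rule that fires when record[field] exceeds `limit`."""
    return (lambda rec: rec.get(field, 0) > limit, label)

def evaluate(record, rules):
    """Return the labels of all rules whose predicate matches the record."""
    return [label for predicate, label in rules if predicate(record)]

rules = [
    make_threshold_rule("amount", 1000, "large_transaction"),
    make_threshold_rule("retries", 3, "flaky_source"),
]
hits = evaluate({"amount": 5000, "retries": 1}, rules)
print(hits)  # ['large_transaction']
```

Separating rule definitions from evaluation is the core of the pattern: new rules can be generated from configuration without touching the engine.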
Posted 2 weeks ago
12.0 - 20.0 years
35 - 40 Lacs
Navi Mumbai
Work from Office
Job Title: Big Data Developer and Project Support & Mentorship

Position Overview: We are seeking a skilled Big Data Developer to join our growing delivery team, with a dual focus on hands-on project support and mentoring junior engineers. This role is ideal for a developer who not only thrives in a technical, fast-paced environment but is also passionate about coaching and developing the next generation of talent. You will work on live client projects, provide technical support, contribute to solution delivery, and serve as a go-to technical mentor for less experienced team members.

Key Responsibilities:
- Perform hands-on Big Data development work, including coding, testing, troubleshooting, and deploying solutions.
- Support ongoing client projects, addressing technical challenges and ensuring smooth delivery.
- Collaborate with junior engineers to guide them on coding standards, best practices, debugging, and project execution.
- Review code and provide feedback to junior engineers to maintain high-quality, scalable solutions.
- Assist in designing and implementing solutions using Hadoop, Spark, Hive, HDFS, and Kafka.
- Lead by example in object-oriented development, particularly using Scala and Java.
- Translate complex requirements into clear, actionable technical tasks for the team.
- Contribute to the development of ETL processes for integrating data from various sources.
- Document technical approaches, best practices, and workflows for knowledge sharing within the team.

Required Skills and Qualifications:
- 8+ years of professional experience in Big Data development and engineering.
- Strong hands-on expertise with Hadoop, Hive, HDFS, Apache Spark, and Kafka.
- Solid object-oriented development experience with Scala and Java.
- Strong SQL skills with experience working with large data sets.
- Practical experience designing, installing, configuring, and supporting Big Data clusters.
- Deep understanding of ETL processes and data integration strategies.
- Proven experience mentoring or supporting junior engineers in a team setting.
- Strong problem-solving, troubleshooting, and analytical skills.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Professional certifications in Big Data technologies (Cloudera, Databricks, AWS Big Data Specialty, etc.).
- Experience with cloud Big Data platforms (AWS EMR, Azure HDInsight, or GCP Dataproc).
- Exposure to Agile or DevOps practices in Big Data project environments.

What We Offer:
- Opportunity to work on challenging, high-impact Big Data projects.
- Leadership role in shaping and mentoring the next generation of engineers.
- Supportive and collaborative team culture.
- Flexible working environment.
- Competitive compensation and professional growth opportunities.
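The ETL responsibilities above can be sketched as a minimal extract-transform-load pipeline (hypothetical sources and fields, for illustration only):

```python
# Minimal extract-transform-load sketch (hypothetical sources; illustrative).

def extract():
    """Pretend to pull raw records from two upstream sources."""
    crm = [{"id": 1, "email": "A@X.COM"}, {"id": 2, "email": None}]
    web = [{"id": 3, "email": "c@y.com"}]
    return crm + web

def transform(records):
    """Drop records without an email and normalise casing."""
    return [
        {**r, "email": r["email"].lower()}
        for r in records
        if r.get("email")
    ]

def load(records, sink):
    """Append cleaned records to the target sink; return count loaded."""
    sink.extend(records)
    return len(records)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded, warehouse)
```

At production scale the same three stages map onto, say, Sqoop/Kafka ingestion, Spark transformations, and Hive/warehouse writes.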
Posted 2 weeks ago
175.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

How will you make an impact in this role?
International Card Services (ICS) Risk & Control is looking for a Manager, ICS Complaint Program Reporting and Insights, with a specific focus on establishing the Reporting and Business Insights workstream for the Complaints Program in line with the requirements of AEMP71. The role involves extensive collaboration with multiple partners across the Global Servicing Group, international markets and legal entities, and ICS Control Management.

The Manager, ICS Complaint Program Reporting and Insights, will:
- Lead and develop the ICS Complaints Reporting and Insights program
- Establish the analytics, insights, and regulatory reporting for the ICS Complaints Program
- Collaborate directly with senior leaders to help them understand complaint trends and how they can respond to them
- Identify complaint themes by leveraging data insights and refer them to ICS and LE leadership as appropriate
- Proactively analyze risk trends, undertake root-cause analysis, and provide consultative support to the business and stakeholders
- Ensure all regulatory requests are managed with 100% accuracy and timeliness
The Manager, Complaints Reporting and Insights, will:
- Design, build, and maintain dashboards and automated reports leveraging ICS Complaints data
- Analyze complaint data to help identify root causes, areas of concern, and potential issues
- Compile thematic risk reporting (levels, trends, causes) to provide the business with actionable, meaningful insights into current risk levels, emerging trends, and root causes
- Translate complex data into concise, impactful visualizations and presentations for decision-making
- Proactively identify opportunities to improve data quality, reporting processes, and analytical capabilities
- Utilize Natural Language Processing and generative AI tools to automate report generation, summarize data insights, and improve data storytelling
- Collaborate with stakeholders to define KPIs, reporting needs, and performance metrics
- Research and implement AI-driven BI innovations to continuously enhance business insights and reporting best practices

Required Qualifications:
- 5+ years of experience in data analytics, generating business insights, or a similar role
- Proficient analytical and problem-solving skills, with an ability to analyze data, identify trends, and evaluate risk scenarios effectively
- Hands-on experience with Python, R, Tableau (Developer or Desktop Certified Professional), Power BI, Cornerstone, SQL, Hive, and advanced MS Excel (macros, pivots)
- Hands-on experience with AI/ML frameworks, NLP, sentiment analysis, text summarization, etc.
- Strong analytical, critical thinking, and problem-solving skills
- Ability to communicate complex findings clearly to both technical and non-technical audiences

Preferred Qualifications:
- Bachelor's degree in Business, Risk Management, Statistics, Computer Science, or a related field; advanced degrees (e.g., MBA, MSc) or certifications are advantageous
- Experience in at least one of the following: identifying operational risks throughout business processes and systems; enhancing risk assessments and associated methodologies; reviewing and creating thematic risk reporting to provide actionable insights into risk levels, emerging trends, and root causes
- Experience in the financial services industry
- Experience in Big Data or Data Science will be a definite advantage
- Familiarity with ERP systems or business process tools
- Knowledge of predictive analytics

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law.
Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
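The sentiment-analysis capability this role asks for can be illustrated with a naive lexicon-based scorer (toy word lists; production work would use a proper NLP library or model):

```python
# Naive lexicon-based sentiment scoring sketch for complaint text
# (toy word lists, illustrative only; real analysis would use an NLP model).

NEGATIVE = {"delay", "charged", "unresolved", "rude", "error"}
POSITIVE = {"resolved", "helpful", "quick", "thanks"}

def sentiment_score(text: str) -> int:
    """Positive minus negative word hits; a score below 0 flags a likely complaint."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

s1 = sentiment_score("billing error still unresolved after long delay")
s2 = sentiment_score("issue resolved quickly, very helpful agent")
print(s1, s2)  # -3 2
```

Scores like these, aggregated over thousands of complaints, are what feed thematic trend reporting; the lexicon approach also shows why punctuation and stemming matter before real deployment.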
Posted 2 weeks ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Skills: Python, Spark, Data Engineer, Cloudera, On-premise, Azure, Snowflake, Kafka

Overview of the Company
Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it's the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom. It's a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries.

Team Overview
The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data; we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization: that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution!

About the Role
Title: Lead Data Engineer
Location: Mumbai

Responsibilities
- End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow.
- Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the team's data pipeline framework evolution.
- Data Architecture & Solutions: Contribute to data architecture design, applying data modelling, storage, and retrieval expertise.
- Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices.
- Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights.
- Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth.

Qualification Details
- Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field.
- Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts.
- Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.).
- Database Expertise: Excellent querying skills (SQL) and a strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus.
- End-to-End Pipelines: Demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks, including streaming real-time data.
- Cloud Expertise: Knowledge of cloud technologies such as Azure HDInsight, Synapse, and Event Hubs, and GCP Dataproc, Dataflow, and BigQuery.
- CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation.

Desired Skills & Attributes
- Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems and troubleshoot data pipeline issues effectively.
- Communication & Collaboration: Excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders).
- Continuous Learning & Adaptability: A demonstrated passion for staying up to date with emerging data technologies and a willingness to adapt to new tools.
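The streaming technologies in the qualifications (Kafka, Spark Streaming, Flink) share a core primitive, windowed aggregation. A pure-Python sketch of a tumbling window (illustrative only; real engines handle this distributed and incrementally):

```python
# Tumbling-window aggregation sketch: the core idea behind stream
# processors like Spark Streaming or Flink, in plain Python.
from collections import defaultdict

def tumbling_counts(events, window_s):
    """Group (timestamp, key) events into fixed-size windows; count per key."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts // window_s * window_s  # floor to window boundary
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(1, "click"), (4, "view"), (7, "click"), (12, "click")]
result = tumbling_counts(events, 10)
print(result)  # {0: {'click': 2, 'view': 1}, 10: {'click': 1}}
```

Each event lands in exactly one non-overlapping window, which is what distinguishes tumbling windows from sliding or session windows.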
Posted 2 weeks ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Skills: Python, Apache Spark, Snowflake, Data Engineer, Spark, Kafka, Azure

Overview of the Company
Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it's the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom. It's a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries.

Team Overview
The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data; we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization: that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution!

About the Role
Title: Lead Data Engineer
Location: Mumbai

Responsibilities
- End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow.
- Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the team's data pipeline framework evolution.
- Data Architecture & Solutions: Contribute to data architecture design, applying data modelling, storage, and retrieval expertise.
- Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices.
- Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights.
- Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth.

Qualification Details
- Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field.
- Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts.
- Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.).
- Database Expertise: Excellent querying skills (SQL) and a strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus.
- End-to-End Pipelines: Demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks, including streaming real-time data.
- Cloud Expertise: Knowledge of cloud technologies such as Azure HDInsight, Synapse, and Event Hubs, and GCP Dataproc, Dataflow, and BigQuery.
- CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation.

Desired Skills & Attributes
- Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems and troubleshoot data pipeline issues effectively.
- Communication & Collaboration: Excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders).
- Continuous Learning & Adaptability: A demonstrated passion for staying up to date with emerging data technologies and a willingness to adapt to new tools.
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description: About Us At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us! Global Business Services Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services. 
Process Overview*
EIT is a centralized group within Global Risk Management responsible for independent testing of operational processes within the eight lines of business and enterprise control functions at Bank of America, ensuring the company is in compliance with domestic and international laws, rules and regulations, and that risk and control procedures are operating effectively.
Job Description*
The Sampling and Test Development Specialist II, with minimal supervision, works in collaboration with the Test Owners, Front Line Units, other Sampling and Test Development resources, and the Test Execution Teams to design and assess the quality of both manual and automated tests, validate data sourcing, conduct required sampling governance or distribute samples for testing, and design or revise sampling procedure documentation, with expert-level efficiency and quality. This includes driving test structure to support automation. They will make required changes to new and existing test scripts and test plan documentation, as well as sample and data requirements, and maintain integrity within the system of record. The Sampling and Test Development Specialist II will independently gather test scripting and data requirements and work with data partners to ensure appropriate test design and sampling requirements are incorporated into the Test Plans. Evaluates whether pilot testing is required, participates in testing as needed, and participates in other phases of testing (intake, execution, reporting) to provide expertise and feedback on assigned areas. Maintains SOR (System of Record) tracking of test status per standards. Provides peer coaching and direction where needed.
Responsibilities*
This role is responsible for accessing pertinent databases or acquiring raw data from third-party sources along with all associated documentation, and for documenting business procedures within testing scripts.
The Sampling and Test Development Specialist II often acts independently to uncover and resolve issues associated with procurement of data to be used for testing and the structure and design of complex tests. This role will deliver high-quality results and manage, manipulate, and summarize large quantities of data. The Sampling and Test Development Specialist II must participate in and occasionally lead additional projects across Sampling and Test Development, including escalating areas requiring process refinement and revision and taking a leadership role to effect changes when needed.
Requirements*
Education: Graduates or post-graduates in Computer Science, Software Engineering, or Statistics - B.Tech/B.E./B.Sc. (Statistics)/B.C.A./M.C.A./M.Sc. (Statistics) Certifications If Any - NA Experience Range: 4-6 years
Foundational skills*
Advanced understanding of automation tools and ability to influence test owners to define ways to structure tests in an automated fashion. Advanced knowledge of data warehouse and mining concepts and a baseline understanding of SAS/SQL query language and syntax. Experience building queries to source data from a variety of data sources such as DB2, Teradata, Oracle, SQL Server, Hadoop, Hive, and Python. Proficiency with the MS Office suite, with an emphasis on Excel for data analytics, pivot tables, and lookups. Proven ability to leverage automation efficiencies, tools, and capabilities where possible. Experience building data acquisition routines in a tool such as Trifacta, Alteryx, MicroStrategy, Tableau, Cognos, or Python (or other similar business intelligence applications). Strong research, analytical, problem-solving, and technical skills. Demonstrated project management skills; ability to handle multiple competing priorities with demonstrated success at achieving SLAs (Service Level Agreements). Strong partnership and influencing skills.
Excellent verbal and written communication skills as well as interpersonal skills. Self-starter, organized, versatile, capable of performing work independently with minimal direction. Ability to think independently, solve complex problems, and develop integrated solutions. Ability to translate business objectives into comprehensive test requirements. Demonstrated ability to interface effectively with senior management. Strong team player.
Desired skills: Compliance or Risk certification a plus
Work Timings: 1.30 PM - 10.30 PM
Job Location: Chennai
Posted 2 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Title: Assistant CMI Manager - Beverages Work Location: Mumbai HO Scope: Beverages – Coffee, Green Tea, New Beverages Brands/Formats (Performance, Strategy & Innovation), India Function/BU: Nutrition, CMI
Main Job Purpose
CMI's mission is to provide inspiration and provocation to drive transformational growth. This means delivering insights, ideas, and impact in a world that is changing at an accelerating pace. As a member of the CMI team, you will be responsible for partnering with key stakeholders in the organization to drive these kinds of actions and growth. Unilever is looking for a passionate Consumer Market Insights Manager who can help drive the Coffee & Non-Milk Tea business in India, partnering with the Bru, Lipton, and any upcoming new brand in Unilever's beverages portfolio. The essence of the role is to ensure consumer centricity in every aspect of the tea business. Strong analytical/problem-solving skills, the ability to deal with complexity, and business partnering are critical skills needed to succeed in this role.
Key Responsibilities
Lead all consumer research for our tea brands in India, ranging from performance to mix/communication testing, to ensure we win in the market Partner closely with the Beverages lead and Director of Nutrition on larger strategic pieces in the Coffee, Green Tea & beverages space Anchor the future of the coffee & non-milk tea program in India in terms of white-space identification and inspiring innovation Work with a multi-functional team across Marketing, Finance, CSP, R&D, and CTI teams Proactively identify pockets of growth and diagnose hotspots to drive incremental growth for all our tea brands in India Drive product love and immersion within the team, with deep involvement in product development and competition tracking to help our portfolio stay best in class Help our brands stay ahead of the curve in terms of positioning and unlock the next wave of winning communication.
Be the SPOC for external research agencies; drive excellence and efficiency Look to bring in new/agile methodologies to keep CMI on the cutting edge
Key Interfaces
CMI – Global/Business Units/HIVE/PDC Marketing – Assistant Brand Managers, Brand Managers, Brand Directors, Brand Vice President Research and advertising agencies R&D, Consumer Technical Insight Coffee & Tea Finance Team Coffee & Tea CSP/CM Team
Skills/Experience/Qualifications
Ability to pull and decipher data to make cogent insights using standard industry panels like RMS and KWP will be preferred The role will involve substantial Brand Development & Communication Testing work, including strategic projects like qualitative Brand Equity, etc. While prior experience is not necessary or expected, the candidate should be willing to learn testing methodologies on Concept, Pack, Product, Proposition, Influencer Content Measurement & Testing, etc. Proven market research experience on either the client or agency side with key quantitative and qualitative techniques Able to analyse and tell a story with data/information from different sources Demonstrated strong ownership, collaboration, communication, and presentation skills Strong team player who can partner effectively with a cross-functional team Willingness and flexibility to offer full support to the brand team and CMI colleagues
Posted 2 weeks ago
3.0 - 6.0 years
15 - 25 Lacs
Pune
Work from Office
About the role
As an Ab Initio Admin, you will make an impact by leveraging your expertise in data warehousing and ETL processes and driving data integration solutions. You will be a valued member of the AI Analytics group and work collaboratively with CMT team members. In this role, you will: Develop and implement efficient ETL processes using Ab Initio tools to ensure seamless data integration and transformation Collaborate with cross-functional teams to gather requirements and design data solutions that meet business needs Optimize data warehousing solutions by applying advanced scheduling techniques and SQL queries Troubleshoot and resolve data-related issues to maintain system reliability and performance Provide technical expertise in Ab Initio Conduct>It and Co>Operating System to enhance data processing capabilities Ensure data accuracy and consistency by conducting thorough testing and validation of ETL processes
Work model
We believe hybrid work is the way forward as we strive to provide flexibility wherever possible. Based on this role’s business requirements, this is a hybrid position requiring 3 days a week in a client or Cognizant office in your respective work location. Regardless of your working arrangement, we are here to support a healthy work-life balance through our various wellbeing programs.
What you must have to be considered
Possess strong knowledge of data warehousing concepts and scheduling basics to design effective data solutions Demonstrate proficiency in ETL processes and SQL for efficient data manipulation and transformation Have hands-on experience with Ab Initio GDE and Conduct>It for robust data integration Utilize Unix Shell Scripting to automate routine tasks and improve operational efficiency
These will help you stand out
Show expertise in Ab Initio Co>Operating System to optimize data processing workflows Exhibit skills in Unix Shell Scripting to automate tasks and streamline operations Display a collaborative mindset to work effectively in a hybrid work model and day shift environment
We're excited to meet people who share our mission and can make an impact in a variety of ways. Don't hesitate to apply, even if you only meet the minimum requirements listed. Think about your transferable experiences and unique skills that make you stand out as someone who can bring new and exciting things to this role.
Posted 2 weeks ago
2.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
The Data Science Analyst 2 is a developing professional role. Applies specialty area knowledge in monitoring, assessing, analyzing and/or evaluating processes and data. Identifies policy gaps and formulates policies. Interprets data and makes recommendations. Researches and interprets factual information. Identifies inconsistencies in data or results, defines business issues and formulates recommendations on policies, procedures or practices. Integrates established disciplinary knowledge within own specialty area with basic understanding of related industry practices. Good understanding of how the team interacts with others in accomplishing the objectives of the area. Develops working knowledge of industry practices and standards. Limited but direct impact on the business through the quality of the tasks/services provided. Impact of the job holder is restricted to own team. Responsibilities: The Data Engineer is responsible for building Data Engineering Solutions using next generation data techniques. The individual will be working with tech leads, product owners, customers and technologists to deliver data products/solutions in a collaborative and agile environment. Responsible for design and development of big data solutions. 
Partner with domain experts, product managers, analysts, and data scientists to develop Big Data pipelines in Hadoop Responsible for moving all legacy workloads to the cloud platform Work with data scientists to build client pipelines using heterogeneous sources and provide engineering services for data science applications Ensure automation through CI/CD across platforms, both in the cloud and on-premises Define needs around maintainability, testability, performance, security, quality, and usability for the data platform Drive implementation of consistent patterns, reusable components, and coding standards for data engineering processes Convert SAS-based pipelines into languages like PySpark and Scala to execute on Hadoop, Snowflake, and non-Hadoop ecosystems Tune big data applications on Hadoop, cloud, and non-Hadoop platforms for optimal performance Applies an in-depth understanding of how data analytics collectively integrate within the sub-function, and coordinates and contributes to the objectives of the entire function. Produces detailed analysis of issues where the best course of action is not evident from the information available, but actions must be recommended/taken. Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
Qualifications:
2-4 years of total IT experience Experience with Hadoop (Cloudera)/big data technologies/cloud/AI tools Hands-on experience with HDFS, MapReduce, Hive, Impala, Spark, Kafka, Kudu, Kubernetes, dashboard tools, Snowflake, AWS tools, AI/ML libraries and tools, etc. Experience in designing and developing data pipelines for data ingestion or transformation.
System-level understanding - data structures, algorithms, distributed storage & compute tools, SQL expertise, shell scripting, scheduling tools, Scrum/Agile methodologies. Can-do attitude toward solving complex business problems, good interpersonal and teamwork skills
Education: Bachelor’s/University degree or equivalent experience
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Data Science ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.
Title And Summary
Associate Managing Consultant – Performance Analytics-2
Associate Managing Consultant – Performance Analytics
Advisors & Consulting Services
Services within Mastercard is responsible for acquiring, engaging, and retaining customers by managing fraud and risk, enhancing cybersecurity, and improving the digital payments experience. We provide value-added services and leverage expertise, data-driven insights, and execution. Our Advisors & Consulting Services team combines traditional management consulting with Mastercard’s rich data assets, proprietary platforms, and technologies to provide clients with powerful strategic insights and recommendations. Our teams work with a diverse global customer base across industries, from banking and payments to retail and restaurants. The Advisors & Consulting Services group has five specializations: Strategy & Transformation, Performance Analytics, Business Experimentation, Marketing, and Program Management. Our Performance Analytics consultants translate data into insights by leveraging Mastercard and customer data to design, implement, and scale analytical solutions for customers. They use qualitative and quantitative analytical techniques and enterprise applications to synthesize analyses into clear recommendations and impactful narratives. Positions for different specializations and levels are available in separate job postings.
Please review our consulting specializations to learn more about all opportunities and apply for the position that is best suited to your background and experience: https://careers.mastercard.com/us/en/consulting-specializations-at-mastercard
Roles and Responsibilities
Client Impact
Manage deliverable development and workstreams on projects across a range of industries and problem statements Contribute to and/or develop analytics strategies and programs for large, regional, and global clients by leveraging data and technology solutions to unlock client value Manage working relationships with client managers and act as a trusted and reliable partner Create predictive models using segmentation and regression techniques to drive profits Review analytics end-products to ensure accuracy, quality, and timeliness Proactively seek new knowledge and structure project work to facilitate the capture of intellectual capital with minimal oversight
Team Collaboration & Culture
Develop sound business recommendations and deliver effective client presentations Plan, organize, and structure own work and that of junior project delivery consultants to identify effective analysis structures to address client problems and synthesize analyses into relevant findings Lead team and external meetings, and lead or co-lead project management Contribute to the firm's intellectual capital and solution development Grow from coaching to enable ownership of day-to-day project management across client projects, and mentor junior consultants Develop effective working relationships with local and global teams, including business partners
Qualifications
Basic qualifications
Undergraduate degree with data and analytics experience in business intelligence and/or descriptive, predictive, or prescriptive analytics Experience managing clients or internal stakeholders Ability to analyze large datasets and synthesize key findings to provide recommendations via descriptive analytics and business intelligence
Knowledge of metrics, measurements, and benchmarking as applied to complex and demanding solutions across multiple industry verticals Data and analytics experience such as working with data analytics software (e.g., Python, R, SQL, SAS) and building, managing, and maintaining database structures Advanced Word, Excel, and PowerPoint skills Ability to perform multiple tasks with multiple clients in a fast-paced, deadline-driven environment Ability to communicate effectively in English and the local office language (if applicable) Eligibility to work in the country where you are applying, as well as to apply for travel visas as required by travel needs
Preferred Qualifications
Additional data and analytics experience working with the Hadoop framework and coding using Impala, Hive, or PySpark, or working with data visualization tools (e.g., Tableau, Power BI) Experience managing tasks or workstreams in a collaborative team environment Experience coaching junior delivery consultants Relevant industry expertise MBA or master’s degree with relevant specialization (not required)
Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach; and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
R-249259
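The "predictive models using segmentation and regression techniques" mentioned in the responsibilities above reduce, in the simplest case, to fitting a line by ordinary least squares. A minimal sketch in plain Python follows; the spend/profit numbers are invented for illustration, and real engagements would use proper statistical tooling and client data:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x, closed-form solution."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Invented example: spend (x) vs. profit (y) for a handful of accounts.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
a, b = fit_line(xs, ys)
print(a, b)  # 0.0 2.0 (perfect line y = 2x)
```

Segmentation would typically fit a separate model per customer segment rather than one global line.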
Posted 2 weeks ago
3.0 - 8.0 years
15 - 30 Lacs
Pune
Hybrid
Job Title: Data Engineer Location: Work from Office (Hybrid) Job Location: Magarpatta, Pune Shift timing: 11 am to 8 pm
Job responsibilities: Design, develop, and maintain ETL pipelines using Informatica PowerCenter or Talend to extract, transform, and load data into EDW systems and the data lake. Optimize and troubleshoot complex SQL queries and ETL jobs to ensure efficient data processing and high performance. Technologies - SQL, Informatica PowerCenter, Big Data, Hive
Required skill combination: SQL, Informatica PowerCenter, Hive, Talend
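The extract-transform-load flow this posting describes can be sketched minimally in plain Python. This is illustrative only: the CSV payload, column names, and in-memory SQLite target are assumptions, not part of the posting, and production pipelines would use Informatica PowerCenter or Talend against a real EDW as stated:

```python
import csv
import io
import sqlite3

# Illustrative source data; a real pipeline would extract from upstream systems.
RAW = "id,amount\n1,10.5\n2,20.0\n3,abc\n"

def extract(text):
    """Extract: parse raw CSV rows into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: cast types and reject rows that fail validation."""
    clean = []
    for row in rows:
        try:
            clean.append((int(row["id"]), float(row["amount"])))
        except ValueError:
            continue  # drop malformed records (e.g., amount "abc")
    return clean

def load(rows, conn):
    """Load: write validated rows into the warehouse table."""
    conn.execute("CREATE TABLE IF NOT EXISTS fact_amount (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO fact_amount VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
total = conn.execute("SELECT SUM(amount) FROM fact_amount").fetchone()[0]
print(total)  # 30.5
```

The same three-stage shape (extract, validate/transform, load) is what the ETL tools named above implement at scale, with the SQL optimization work happening on the load/query side.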
Posted 2 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Location: Mumbai Experience: 0-6 months Technologies/Skills: Advanced SQL, Python and associated libraries like Pandas and NumPy, PySpark, shell scripting, data modelling, big data, Hadoop, Hive, ETL pipelines.
Responsibilities
Proven success in communicating with users, other technical teams, and senior management to collect requirements, describe data modeling decisions, and develop data engineering strategy. Ability to work with business owners to define key business requirements and convert them to user stories with the required technical specifications. Communicate results and business impacts of insight initiatives to key stakeholders to collaboratively solve business problems. Work closely with the overall Enterprise Data & Analytics Architect and Engineering practice leads to ensure adherence to best practices and design principles. Assure quality, security, and compliance requirements are met for the supported area. Design and create fault-tolerant data pipelines running on clusters. Excellent communication skills with the ability to influence client business and IT teams. Should have designed data engineering solutions end to end. Ability to come up with scalable and modular solutions.
Required Qualification
0-6 months of hands-on experience designing and developing data pipelines for data ingestion or transformation using Python (PySpark)/Spark SQL in the AWS cloud. Experience in the design and development of data pipelines and the processing of data at scale. Advanced experience in writing and optimizing efficient SQL queries with Python and Hive, handling large data sets in big-data environments. Experience in debugging, tuning, and optimizing PySpark data pipelines. Should have implemented the concepts and have good knowledge of PySpark data frames, joins, caching, memory management, partitioning, parallelism, etc. Understanding of the Spark UI, event timelines, DAGs, and Spark config parameters in order to tune long-running data pipelines.
Experience working in Agile implementations Experience with building data pipelines in streaming and batch mode Experience with Git and CI/CD pipelines to deploy cloud applications Good knowledge of designing Hive tables with partitioning for performance
Desired Qualification
Experience in data modelling Hands-on experience creating workflows on a scheduling tool like Autosys or CA Workload Automation Proficiency in using SDKs for interacting with native AWS services Strong understanding of the concepts of ETL, ELT, and data modeling.
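The batch-versus-streaming distinction this posting asks for can be illustrated with a toy generator-based micro-batch in plain Python. All names and numbers here are invented for illustration; real pipelines would use Spark Structured Streaming or similar, not stdlib generators:

```python
from itertools import islice

def event_source():
    """Illustrative bounded source; real pipelines would read from Kafka or files."""
    for i in range(10):
        yield {"user": i % 3, "clicks": i}

def batch_mode(events):
    """Batch: materialize the full dataset, then aggregate once."""
    data = list(events)
    return sum(e["clicks"] for e in data)

def streaming_mode(events, batch_size=4):
    """Streaming (micro-batch): consume fixed-size windows, emit running totals."""
    it = iter(events)
    running = 0
    while True:
        window = list(islice(it, batch_size))
        if not window:
            break
        running += sum(e["clicks"] for e in window)
        yield running

print(batch_mode(event_source()))            # 45
print(list(streaming_mode(event_source())))  # [6, 28, 45]
```

Both modes converge on the same final aggregate; streaming simply surfaces intermediate results without waiting for the whole dataset.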
Posted 2 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
hackajob is collaborating with American Express to connect them with exceptional tech professionals for this role. You Lead the Way. We’ve Got Your Back. With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities and each other. Here, you’ll learn and grow as we help you create a career journey that’s unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally. At American Express, you’ll be recognized for your contributions, leadership, and impact—every colleague has the opportunity to share in the company’s success. Together, we’ll win as a team, striving to uphold our company values and powerful backing promise to provide the world’s best customer experience every day. And we’ll do it with the utmost integrity, and in an environment where everyone is seen, heard and feels like they belong. Join Team Amex and let's lead the way together. With a focus on digitization, innovation, and analytics, the Enterprise Digital teams create central, scalable platforms and customer experiences to help markets across all of these priorities. Their charter is to drive scale for the business and accelerate innovation for both immediate impact and long-term transformation of our business. A unique aspect of Enterprise Digital Teams is the integration of diverse skills across its remit. Enterprise Digital Teams has a very broad range of responsibilities, resulting in a broad range of initiatives around the world. The American Express Enterprise Digital Experimentation & Analytics (EDEA) team leads the Enterprise Product Analytics and Experimentation charter for Brand & Performance Marketing and Digital Acquisition & Membership experiences, as well as Enterprise Platforms.
The focus of this collaborative team is to drive growth by enabling efficiencies in paid performance channels and evolving our digital experiences with actionable insights and analytics. The team specializes in using data around digital product usage to drive improvements in the acquisition customer experience to deliver higher satisfaction and business value.
About This Role
This role will report to the Manager of the Membership Experience Analytics team within Enterprise Digital Experimentation & Analytics (EDEA) and will be based in Gurgaon. The candidate will be responsible for delivering highly impactful analytics to optimize our Digital Membership Experiences across Web & App channels. Deliver strategic analytics focused on Digital Membership experiences across Web & App aimed at optimizing our customer experiences Define and build key KPIs to monitor the acquisition journey performance and success Support the development of new products and capabilities Deliver read-outs of experiments, uncovering insights and learnings that can be utilized to further optimize the customer journey Gain a deep functional understanding of the enterprise-wide product capabilities and associated platforms over time and ensure analytical insights are relevant and actionable Power in-depth strategic analysis and provide analytical and decision support by mining digital activity data along with AXP closed-loop data
Minimum Qualifications
Advanced degree in a quantitative field (e.g., Finance, Engineering, Mathematics, Computer Science) Strong programming skills are preferred; some experience with big data frameworks (Hive, Spark), Python, and SQL Experience in large-scale data processing and handling; an understanding of data science is a plus Ability to work in a dynamic, cross-functional environment, with strong attention to detail Excellent communication skills with the ability to engage, influence, and encourage partners to drive collaboration and alignment
Preferred Qualifications
Strong analytical/conceptual thinking competence to solve unstructured and complex business problems and articulate key findings to senior leaders/partners in a succinct and concise manner. Basic knowledge of statistical techniques for experimentation and hypothesis testing, such as regression, t-tests, and chi-square tests.
Benefits
We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries Bonus incentives Support for financial well-being and retirement Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location) Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need Generous paid parental leave policies (depending on your location) Free access to global on-site wellness centers staffed with nurses and doctors (depending on location) Free and confidential counseling support through our Healthy Minds program Career development and training opportunities
American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
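The experimentation read-outs and hypothesis testing mentioned above often come down to a two-sample comparison between a control and a treatment group. A minimal sketch of Welch's t statistic in stdlib Python; the sample values are invented, and real analyses would also compute degrees of freedom and a p-value with proper statistical libraries:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    va, vb = variance(a), variance(b)  # sample variances
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Invented example: metric values for control vs. treatment users.
control = [10.0, 11.0, 9.0, 10.5, 9.5]
treatment = [12.0, 13.0, 11.5, 12.5, 11.0]

print(welch_t(treatment, control))  # 4.0
```

A large absolute t value (here 4.0) suggests the treatment effect is unlikely to be noise, pending the usual significance checks.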
Posted 2 weeks ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. UnitedHealth Group is a leading health care company serving more than 85 million people worldwide. The organization is ranked 5th among Fortune 500 companies. UHG serves its customers through two platforms - UnitedHealthcare (UHC) and Optum. UHC is responsible for providing healthcare coverage and benefits services, while Optum provides information- and technology-enabled health services. India operations of UHG are aligned to Optum. The Optum Global Analytics Team, part of Optum, is involved in developing broad-based and targeted analytics solutions across different verticals for all lines of business.
Primary Responsibility
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- 5+ years of experience with the skills mentioned below
- Experience designing, implementing, and managing Continuous Integration/Continuous Deployment (CI/CD) pipelines using Azure DevOps, Jenkins, or similar tools
- Experience with monitoring and logging tools like Azure Monitor, Log Analytics, and Application Insights for performance and reliability management
- Hands-on experience enhancing Azure Data Factory (ADF) pipelines and PySpark code
- Experience supporting big data platforms (Hadoop, Hive) and SQL scripting jobs
- Proven expertise in version control systems, particularly Git, for managing and tracking code changes
- Proven Azure cloud exposure (Azure services such as Virtual Machines, Load Balancer, SQL Database, Azure DNS, Blob Storage, Azure AD, etc.)
- Proven excellent verbal communication and presentation skills

Preferred Qualifications
- Experience with Scala, Snowflake, or the healthcare domain

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
Posted 2 weeks ago
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
JOB_POSTING-3-70939

Job Description
Role Title: Analyst, Data Sourcing – Metadata (L08)

Company Overview:
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry's most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India's Best Companies to Work for by Great Place to Work. We were among the Top 50 India's Best Workplaces in Building a Culture of Innovation by All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and Top-Rated Financial Services Companies. Synchrony celebrates ~51% women diversity, 105+ people with disabilities, and ~50 veterans and veteran family members. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.

Organizational Overview
Our Analytics organization comprises data analysts who enable strategies to enhance customer and partner experience and optimize business performance through data management and the development of full-stack descriptive-to-prescriptive analytics solutions using cutting-edge technologies, thereby enabling business growth.

Role Summary/Purpose
The Analyst, Data Sourcing – Metadata (Individual Contributor) role is located in the India Analytics Hub (IAH) as part of Synchrony's enterprise Data Office. This role is responsible for supporting metadata management processes within Synchrony's public and private cloud and on-prem environments within the Chief Data Office.
This role focuses on assisting with metadata harvesting, maintaining data dictionaries, and supporting the tracking of data lineage. The analyst will collaborate closely with senior team members to ensure access to accurate, well-governed metadata for analytics and reporting.

Key Responsibilities
- Implement and maintain metadata management processes across Synchrony's public and private cloud and on-prem environments, ensuring accurate integration with technical and business metadata catalogs.
- Work with the Data Architecture and Data Usage teams to track data lineage, traceability, and compliance, identifying and escalating metadata-related issues.
- Document technical specifications, support solution design, and participate in agile development and release cycles for metadata initiatives.
- Adhere to data management policies, track KPIs for metadata effectiveness, and assist in the assessment of metadata risks to strengthen governance.
- Maintain stable operations, troubleshoot metadata and lineage issues, and contribute to continuous process improvements to improve data accessibility.

Required Skills & Knowledge
- Bachelor's degree, preferably in Engineering or Computer Science, with more than 1 year of hands-on data management experience; or, in lieu of a degree, more than 3 years of such experience.
- Minimum of 1 year of experience in data management, focusing on metadata management, data governance, or data lineage, with exposure to cloud environments (AWS, Azure, or Google Cloud) and on-premise infrastructure.
- Basic understanding of metadata management concepts; familiarity with data cataloging tools (e.g., AWS Glue Data Catalog, Ab Initio, Collibra); basic proficiency in data lineage tracking tools (e.g., Apache Atlas, Ab Initio, Collibra); and understanding of data integration technologies (e.g., ETL, APIs, data pipelines).
- Good communication and collaboration skills, strong analytical thinking and problem-solving abilities, the ability to work independently and manage multiple tasks, and attention to detail.

Desired Skills & Knowledge
- AWS certifications such as AWS Cloud Practitioner or AWS Certified Data Analytics – Specialty

Preferred Qualifications
- Familiarity with hybrid cloud environments (a combination of cloud and on-prem).
- Skilled in Ab Initio Metadata Hub development and support, including importers, extractors, Metadata Hub database extensions, technical lineage, QueryIT, Ab Initio graph development, Ab Initio Control Center and Express IT.
- Experience harvesting technical lineage and producing lineage diagrams.
- Familiarity with Unix, Linux, and Stonebranch, and with database platforms such as Oracle and Hive.
- Basic knowledge of SQL and data query languages for managing and retrieving metadata.
- Understanding of data governance frameworks (e.g., EDMC DCAM, GDPR compliance).
- Familiarity with Collibra.

Eligibility Criteria: Bachelor's degree, preferably in Engineering or Computer Science, with more than 1 year of hands-on data management experience; or, in lieu of a degree, more than 3 years of such experience.

Work Timings: This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours are flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss with the hiring manager for more details.
For Internal Applicants
- Understand the criteria and mandatory skills required for the role before applying.
- Inform your manager and HRM before applying for any role on Workday.
- Ensure that your professional profile is updated (fields such as education, prior experience, other skills); it is mandatory to upload your updated resume (Word or PDF format).
- Must not be on any corrective action plan (First Formal/Final Formal, LPP).
- L4 to L7 employees who have completed 12 months in the organization and 12 months in their current role and level are eligible.
- L8 employees who have completed 18 months in the organization and 12 months in their current role and level are eligible.
- L04+ employees can apply.

Grade/Level: 08
Job Family Group: Information Technology
Posted 2 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Us
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. At Target, we have a timeless purpose and a proven strategy, and that hasn't happened by accident. Some of the best minds from diverse backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values diverse backgrounds. We believe your unique perspective is important, and you'll build relationships by being authentic and respectful. At Target, inclusion is one of our core values. We aim to create equitable experiences for all, regardless of their dimensions of difference. As an equal opportunity employer, Target provides diverse opportunities for everyone to grow and win.

Behind one of the world's best-loved brands is a uniquely capable and brilliant team of data scientists, engineers and analysts. The Target Data & Analytics team creates the tools and data products to sustainably educate and enable our business partners to make great data-based decisions at Target. We help develop the technology that personalizes the guest experience, from product recommendations to relevant ad content. We're also the source of the data and analytics behind Target's Internet of Things (IoT) applications, fraud detection, supply chain optimization and demand forecasting.
We play a key role in identifying the test-and-measure or A/B test opportunities that continuously help Target improve the guest experience, whether guests love to shop in stores or at Target.com.

About This Career
This role is for someone passionate about data, analysis, metrics development, and feature experimentation, and their application to improving business strategies as well as supporting the GSCL operations team.
- Develop, model and apply analytical best practices while upskilling and coaching others on new and emerging technologies; raise the bar for performance in analysis by sharing well-documented analytical solutions with others (clients, peers, etc.).
- Drive a continuous-improvement mindset by seeking out new ways to solve problems through formal trainings, peer interactions and industry publications, to continually improve technical skills, best practices and analytical acumen.
- Be an expert in a specific business domain; be self-directed and drive execution towards outcomes; understand business interdependencies; conduct detailed problem solving; remediate obstacles; use independent judgement and decision making to deliver to product scope; provide inputs to establish product/project timelines.
- Participate in learning forums, or be a buddy, to help increase awareness and adoption of current technical topics relevant to the analytics competency, e.g.
Tools (R, Python); exploratory and descriptive techniques (basic statistics and modelling).
- Champion participation in internal meetups and hackathons; present in internal conferences relevant to the analytics competency.
- Contribute to the evaluation and design of relevant technical guides and tools to hire great talent, by partnering with talent acquisition.
- Participate in Agile ceremonies to keep the team up to date on task progress, as needed.
- Develop and analyse data reports, dashboards and pipelines; perform RCA and troubleshooting of issues that arise, using exploratory and systemic techniques.

About You
- B.E/B.Tech (2-3 years of relevant experience), M.Tech, M.Sc., or MCA (2+ years of relevant experience)
- Candidates with strong domain knowledge and relevant experience in Supply Chain / Retail analytics are highly preferred
- Strong data understanding: inference of patterns, root cause, statistical analysis, forecasting/predictive modelling, etc.
- Advanced SQL experience writing complex queries
- Hands-on experience with analytics tools: Hadoop, Hive, Spark, Python, R, Domo and/or equivalent technologies
- Experience working with product teams and business leaders to develop product roadmaps and feature development
- Able to support conclusions with analytical evidence using descriptive stats, inferential stats and data visualizations
- Strong analytical, problem-solving, and conceptual skills
- Demonstrated ability to work with ambiguous problem definitions, recognize dependencies and deliver impactful solutions through logical problem solving and technical ideation
- Excellent communication skills, with the ability to speak to both business and technical teams and translate ideas between them
- Intellectually curious, high energy and a strong work ethic
- Comfort with ambiguity and open-ended problems in support of supply chain operations

Useful Links
Life at Target: https://india.target.com/
Benefits: https://india.target.com/life-at-target/workplace/benefits
Posted 2 weeks ago
5.0 - 10.0 years
7 - 17 Lacs
Bengaluru
Work from Office
About this role: Wells Fargo is seeking a Lead Data Management Analyst. We believe in the power of working together because great ideas can come from anyone. Through collaboration, any employee can have an impact and make a difference for the entire company. Explore opportunities with us for a career in a supportive environment where you can learn and grow. This role requires a blend of technical expertise, analytical thinking, and strategic decision making to drive impactful insights. At Wells Fargo, we are looking for talented people who will put our customers at the center of everything we do. We are seeking candidates who embrace diversity, equity and inclusion in a workplace where everyone feels valued and inspired. Help us build a better Wells Fargo. It all begins with outstanding talent. It all begins with you.

Wells Fargo is seeking a Lead Data Management Analyst to drive data and analytics initiatives in the Payments Analytics and Insights Team. The team is responsible for supporting Digital Product Managers and Executives, along with relevant stakeholders from across the enterprise. The team will collaborate with stakeholders to develop meaningful analyses that drive actionable insights. The team will also need to influence data architecture and may support payments strategy and product development initiatives.
In this role, you will:
- Consult, review and research moderately complex business, operational, and technical challenges that require an in-depth evaluation of variable data factors
- Perform moderately complex data analysis to support and drive strategic initiatives and business needs
- Develop a deep understanding of technical systems and business processes to extract data-driven insights while identifying opportunities for engineering enhancements
- Lead or participate on large cross-group projects
- Mentor less experienced staff
- Collaborate and consult with peers, colleagues, external contractors, and mid-level managers to resolve issues and achieve goals
- Leverage a solid understanding of compliance and risk management requirements for the supported area
- Develop queries and other programmatic logic to effectively source, aggregate and combine data in line with product/project initiatives
- Lead in utilizing data visualization best practices to tell stories that drive both engagement and action; influence partners to integrate data into decision-making, increase data literacy, and build acceptance of tools, programs, and services
- Proactively manage risk, including measures to minimize inaccuracy and clarify the meaning of data which is delivered
- Document development processes, key decisions made and any risks or open items
- Support transition of code to production environments
Required Qualifications:
- 5+ years of Data Management, Business Analysis, Analytics, or Project Management experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

Desired Qualifications:
- Degree in a quantitative field such as applied math, accounting, engineering, finance, economics, econometrics, computer sciences, or business/social and behavioral sciences with a quantitative emphasis
- 5+ years of experience in an environment with large relational databases or distributed systems, such as SQL Server, Oracle, DB2, Teradata, or Hive
- 5+ years of experience developing complex SQL queries and coding in Python
- 5+ years of experience developing structured visualization solutions and automation tools in Tableau, Power BI, and/or Alteryx
- Experience in maintaining and creating efficient data solutions to support reporting
- Good to have: experience in treasury management products (wires, ACH, instant payments, etc.), correspondent banking, and/or commercial banking
- Partner management experience, with the ability to effectively drive results, provide feedback/direction, and manage and build relationships with leaders and team members in a geographically dispersed team environment
- Excellent verbal, written and interpersonal communication skills
- Strong analytical skills with high attention to detail and accuracy
- A deep technical and nuanced understanding of various payment rails; expertise in wire payments and clearing
- A highly collaborative and inclusive approach, with a proven track record of delivering acknowledged value
- Experience with master reference data such as customer hierarchies, legal entities, and product taxonomies
- Knowledge and understanding of relational and NoSQL databases, and of structured, semi-structured, and unstructured data

Job Expectations:
- Develop and refine business intelligence and data visualizations using Tableau and/or Power BI
- Maintain partner relationships with wires and clearing product teams, ensuring high-quality team deliverables and SLAs
- Perform strategic ad hoc data analyses to support key business decisions
- Source data from across multiple systems using a variety of database and distributed applications (Hive, Teradata, SQL Server, Oracle)
- Support, generate and distribute existing management information systems (MIS) reports in an accurate and timely manner
- Maintain reference data that supports the entire insights and reporting process
- Develop MIS documentation to allow for smooth operations and easy system maintenance
- Support automation by identifying opportunities in existing processes and reports
- Create best-practice reports based on data mining, analysis, and visualization
- Identify trends and opportunities for growth through analysis of complex datasets
- Provide feedback and present ideas for improving or implementing processes and tools within the analytics group
- Quantify the success of business opportunities, sales campaigns and initiatives; track progress towards revenue and profitability goals and targets
- Solve complex business problems using analytical techniques and tools
Posted 2 weeks ago
8.0 - 13.0 years
10 - 15 Lacs
Bengaluru
Work from Office
As a member of this team, the data engineer will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations.

Essential Responsibilities
As a Senior Software Engineer, your responsibilities will include:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Daily use of technologies such as Python, Spark, Airflow, Snowflake, Hive, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the road map, and taking ownership and responsibility over new projects
- Participating in an on-call rotation in your respective time zone (be available by phone or email in case something goes wrong)

Desired Characteristics
- Minimum 8 years of software engineering experience
- An undergraduate degree in Computer Science (or a related field) from a university where the primary language of instruction is English is strongly desired
- 2+ years of experience/fluency in Python
- Proficient with relational databases and advanced SQL
- Expert in the usage of services like Spark and Hive
- Experience working with container-based solutions is a plus
- Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience using cloud services (AWS) at scale
- Proven long-term experience with, and enthusiasm for, distributed data processing at scale; eagerness to learn new things
- Expertise in designing and architecting distributed, low-latency and scalable solutions in either cloud or on-premises environments
- Exposure to the whole software development lifecycle, from inception to production and monitoring
- Experience in the advertising attribution domain is a plus
- Experience in agile software development processes
- Excellent interpersonal and communication skills
Posted 2 weeks ago
7.0 - 12.0 years
9 - 15 Lacs
Bengaluru
Work from Office
We are looking for lead or principal software engineers to join our Data Cloud team. Our Data Cloud team is responsible for the Zeta Identity Graph platform, which captures billions of behavioural, demographic, environmental, and transactional signals for people-based marketing. As part of this team, the data engineer will be designing and growing our existing data infrastructure to democratize data access, enable complex data analyses, and automate optimization workflows for business and marketing operations.

Job Description
Essential Responsibilities
As a Lead or Principal Data Engineer, your responsibilities will include:
- Building, refining, tuning, and maintaining our real-time and batch data infrastructure
- Daily use of technologies such as HDFS, Spark, Snowflake, Hive, HBase, Scylla, Django, FastAPI, etc.
- Maintaining data quality and accuracy across production data systems
- Working with Data Engineers to optimize data models and workflows
- Working with Data Analysts to develop ETL processes for analysis and reporting
- Working with Product Managers to design and build data products
- Working with our DevOps team to scale and optimize our data infrastructure
- Participating in architecture discussions, influencing the road map, and taking ownership and responsibility over new projects
- Participating in a 24/7 on-call rotation (be available by phone or email in case something goes wrong)

Desired Characteristics
- Minimum 7 years of software engineering experience
- Proven long-term experience with, and enthusiasm for, distributed data processing at scale; eagerness to learn new things
- Expertise in designing and architecting distributed, low-latency and scalable solutions in either cloud or on-premises environments
- Exposure to the whole software development lifecycle, from inception to production and monitoring
- Fluency in Python, or solid experience in Scala or Java
- Proficient with relational databases and advanced SQL
- Expert in the usage of services like Spark, HDFS, Hive and HBase
- Experience with schedulers such as Apache Airflow, Apache Luigi, Chronos, etc.
- Experience using cloud services (AWS) at scale
- Experience in agile software development processes
- Excellent interpersonal and communication skills

Nice to have:
- Experience with large-scale / multi-tenant distributed systems
- Experience with columnar / NoSQL databases: Vertica, Snowflake, HBase, Scylla, Couchbase
- Experience with real-time streaming frameworks: Flink, Storm
- Experience with web frameworks such as Flask and Django
Posted 2 weeks ago
7.0 - 10.0 years
20 - 35 Lacs
Pune
Hybrid
At Medtronic you can begin a life-long career of exploration and innovation, while helping champion healthcare access and equity for all. You'll lead with purpose, breaking down barriers to innovation in a more connected, compassionate world.

A Day in the Life
Our Global Diabetes Capability Center in Pune is expanding to serve more people living with diabetes globally. Our state-of-the-art facility is dedicated to transforming diabetes management through innovative solutions and technologies that reduce the burden of living with diabetes. We're a mission-driven leader in medical technology and solutions with a legacy of integrity and innovation; join our new Minimed India Hub as Senior Digital Engineer.

Responsibilities may include the following, and other duties may be assigned:
- Translate conceptual needs and business requirements into finalized architectural designs.
- Manage large projects or processes that span across other collaborative teams, both within and beyond Digital Technology.
- Operate autonomously to define, describe, diagram and document the role and interaction of the high-level technological and human components that combine to provide cost-effective and innovative solutions to meet evolving business needs.
- Promote, guide and govern good architectural practice through the application of well-defined, proven technology and human interaction patterns, and through architecture mentorship.
- Design, develop, and maintain scalable data pipelines, preferably using PySpark.
- Work with structured and unstructured data from various sources.
- Optimize and tune PySpark applications for performance and scalability.
- Support the full lifecycle management of the entire IT portfolio, including the selection, appropriate usage, enhancement and replacement of information technology applications, infrastructure and services.
- Implement data quality checks and ensure data integrity.
- Monitor and troubleshoot data pipeline issues and ensure timely resolution.
- Document technical specifications and maintain comprehensive documentation for data pipelines.

The ideal candidate is exposed to the fast-paced world of big data technology and has experience in building ETL/ELT data solutions using new and emerging technologies while maintaining stability of the platform.

Required Knowledge and Experience:
- Strong programming knowledge in Java, Scala, or Python/PySpark, plus SQL.
- 4-8 years of experience in data engineering, with a focus on PySpark.
- Proficiency in Python and Spark, with strong coding and debugging skills.
- Experience designing and building enterprise data solutions on AWS, Azure, or Google Cloud Platform (GCP).
- Experience with big data technologies such as Hadoop, Hive, and Kafka.
- Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server).
- Experience with data warehousing solutions like Redshift, Snowflake, Databricks or Google BigQuery.
- Familiarity with data lake architectures and data storage solutions.
- Knowledge of CI/CD pipelines and version control systems (e.g., Git).
- Excellent problem-solving skills and the ability to troubleshoot complex issues.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.

Physical Job Requirements
The above statements are intended to describe the general nature and level of work being performed by employees assigned to this position, but they are not an exhaustive list of all the required responsibilities and skills of this position.

Regards,
Ashwini Ukekar
Sourcing Specialist
Posted 2 weeks ago
4.0 - 7.0 years
6 - 12 Lacs
Pune
Hybrid
A Day in the Life
We're a mission-driven leader in medical technology and solutions with a legacy of integrity and innovation; join our new Minimed India Hub as Digital Engineer. We are working to improve how healthcare addresses the needs of more people, in more ways and in more places around the world. As a PySpark Data Engineer, you will be responsible for designing, developing, and maintaining data pipelines using PySpark. You will work closely with data scientists, analysts, and other stakeholders to ensure the efficient processing and analysis of large datasets, while handling complex transformations and aggregations.

Responsibilities may include the following, and other duties may be assigned:
- Design, develop, and maintain scalable and efficient ETL pipelines using PySpark.
- Work with structured and unstructured data from various sources.
- Optimize and tune PySpark applications for performance and scalability.
- Collaborate with data scientists and analysts to understand data requirements, review Business Requirement documents and deliver high-quality datasets.
- Implement data quality checks and ensure data integrity.
- Monitor and troubleshoot data pipeline issues and ensure timely resolution.
- Document technical specifications and maintain comprehensive documentation for data pipelines.
- Stay up to date with the latest trends and technologies in big data and distributed computing.

Required Knowledge and Experience:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 4-5 years of experience in data engineering, with a focus on PySpark.
- Proficiency in Python and Spark, with strong coding and debugging skills.
- Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server).
- Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP).
- Experience with data warehousing solutions like Redshift, Snowflake, Databricks or Google BigQuery.
- Familiarity with data lake architectures and data storage solutions.
- Experience with big data technologies such as Hadoop, Hive, and Kafka.
- Excellent problem-solving skills and the ability to troubleshoot complex issues.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.

Preferred Skills:
- Experience with Databricks.
- Experience with orchestration tools like Apache Airflow or AWS Step Functions.
- Knowledge of machine learning workflows and experience working with data scientists.
- Understanding of data security and governance best practices.
- Familiarity with streaming data platforms and real-time data processing.
- Knowledge of CI/CD pipelines and version control systems (e.g., Git).

Physical Job Requirements
The above statements are intended to describe the general nature and level of work being performed by employees assigned to this position, but they are not an exhaustive list of all the required responsibilities and skills of this position.

If interested, please share your updated CV at ashwini.ukekar@medtronic.com

Regards,
Ashwini Ukekar
Sourcing Specialist, Medtronic
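Several of the postings above ask candidates to "implement data quality checks and ensure data integrity" in data pipelines. Stripped of the Spark machinery, the core idea can be sketched in plain Python; the record schema, field names, and rules below are hypothetical, chosen only to illustrate the pattern:

```python
# Hypothetical data-quality checks of the kind a pipeline might run before
# loading records downstream. Schema and rules are illustrative only.

def run_quality_checks(records, required_fields, key_field):
    """Return (clean_records, issues) after null and duplicate-key checks."""
    issues = []
    seen_keys = set()
    clean = []
    for i, rec in enumerate(records):
        # Null / missing-field check
        missing = [f for f in required_fields if rec.get(f) in (None, "")]
        if missing:
            issues.append((i, f"missing fields: {missing}"))
            continue
        # Duplicate-key check
        key = rec[key_field]
        if key in seen_keys:
            issues.append((i, f"duplicate key: {key}"))
            continue
        seen_keys.add(key)
        clean.append(rec)
    return clean, issues

rows = [
    {"id": 1, "amount": 100},
    {"id": 2, "amount": None},  # fails the null check
    {"id": 1, "amount": 150},   # fails the duplicate-key check
]
clean, issues = run_quality_checks(rows, ["id", "amount"], "id")
# clean keeps only the first row; issues records the two rejections
```

In a real PySpark pipeline the same checks would typically be expressed as DataFrame filters and `dropDuplicates` calls so they run distributed, but the quarantine-and-report structure is the same.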
Posted 2 weeks ago
Hive is a popular data warehousing tool used for querying and managing large datasets in distributed storage. In India, the demand for professionals with expertise in Hive is on the rise, with many organizations looking to hire skilled individuals for various roles related to data processing and analysis.
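HiveQL, Hive's query language, is closely modeled on SQL, so the warehouse-style queries Hive professionals write read like ordinary SQL run over files in distributed storage. As a runnable illustration (using Python's built-in sqlite3 in place of a Hive cluster; the table and columns are made up):

```python
import sqlite3

# Stand-in for a Hive table. In Hive, this query would run over data in
# distributed storage, but the query text itself reads like ordinary SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("south", 120.0), ("south", 80.0), ("north", 50.0)],
)

# A typical warehouse aggregation: total sales per region
query = """
    SELECT region, SUM(amount) AS total_sales
    FROM orders
    GROUP BY region
    ORDER BY total_sales DESC
"""
results = conn.execute(query).fetchall()
# results → [("south", 200.0), ("north", 50.0)]
```

The same SELECT/GROUP BY statement would be valid HiveQL; what Hive adds is the execution layer that translates it into distributed jobs over very large datasets.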
Tech hubs such as Bengaluru, Hyderabad, Pune and Noida, where the roles above are based, are known for their thriving tech industries and offer numerous opportunities for professionals looking to work with Hive.
The average salary range for Hive professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.
Typically, a career in Hive progresses from roles such as Junior Developer or Data Analyst to Senior Developer, Tech Lead, and eventually Architect or Data Engineer. Continuous learning and hands-on experience with Hive are crucial for advancing in this field.
Apart from expertise in Hive, professionals in this field are often expected to have knowledge of SQL, Hadoop, data modeling, ETL processes, and data visualization tools like Tableau or Power BI.
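Of these adjacent skills, ETL (extract, transform, load) is the one most often probed in interviews: pull raw data from a source, clean and reshape it, and write it to a target store. A toy end-to-end sketch in Python (the CSV data, field names, and in-memory "warehouse" are invented for illustration):

```python
# Toy extract-transform-load flow; the source data and schema are invented.
raw_csv = "name,score\nalice,90\nbob,\ncarol,75\n"

def extract(text):
    # Extract: parse CSV text into a list of dicts
    lines = text.strip().splitlines()
    header = lines[0].split(",")
    return [dict(zip(header, line.split(","))) for line in lines[1:]]

def transform(rows):
    # Transform: drop rows with missing scores, cast types
    return [
        {"name": r["name"], "score": int(r["score"])}
        for r in rows if r["score"]
    ]

def load(rows, target):
    # Load: append the cleaned rows to a stand-in warehouse
    target.extend(rows)

warehouse = []
load(transform(extract(raw_csv)), warehouse)
# warehouse → [{"name": "alice", "score": 90}, {"name": "carol", "score": 75}]
```

Production ETL replaces each step with heavier tooling (Spark or Hive for transform, Redshift or Snowflake as the load target, Airflow for scheduling), but the three-stage shape is the same.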
As you explore job opportunities in the field of Hive in India, remember to showcase your expertise and passion for data processing and analysis. Prepare well for interviews by honing your skills and staying updated with the latest trends in the industry. Best of luck in your job search!