
2510 Hive Jobs - Page 24

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site


Organization: At CommBank, we never lose sight of the role we play in other people’s financial wellbeing. Our focus is to help people and businesses move forward and progress: to make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact we can make with our work. Together we can achieve great things.

Job Title: Data Scientist
Location: Bangalore
Business & Team: RBS Decision Science

Impact & Contribution:
The Data Scientist will use technical knowledge and an understanding of the business domain to own and deliver moderately to highly complex data science projects independently or with minimal guidance. You will also engage and collaborate with business stakeholders to clearly articulate findings that solve business problems.

Roles & Responsibilities:
- Lead data-driven initiatives, from problem formulation to model deployment, leveraging advanced statistical techniques and machine learning algorithms.
- Drive the development and implementation of scalable data solutions, ensuring accuracy and reliability of predictive models.
- Collaborate with business stakeholders to define project goals, prioritize tasks, and deliver actionable insights.
- Design and execute experiments to evaluate model performance and optimize algorithms for maximum efficiency.
- Develop and deploy production-grade machine learning models on cloud-based and on-prem platforms.
- Lead cross-functional teams in the design and execution of data science projects, ensuring alignment with business objectives.
- Stay abreast of emerging technologies and industry trends, continuously enhancing expertise in data science methodologies and tools.
- Drive innovation by exploring new approaches and techniques for solving complex business problems through data analysis and modelling.
- Mentor junior team members, providing guidance on best practices and technical skills development.
- Strongly support the adoption of data science across the organization.
- Identify problems in the products, services, and operations of the bank and solve them with innovative, research-driven solutions.

Essential Skills:
- Strong hands-on programming experience in Python (mandatory), R, SQL, Hive, and Spark; 5+ years of experience in the above skills.
- Ability to write well-designed, modular, and optimized code.
- Knowledge of H2O.ai, GitHub, Big Data, and ML Engineering.
- Knowledge of Snowflake, AWS, Azure, etc.
- Knowledge of commonly used data structures and algorithms.
- Solid foundation in statistics and core ML algorithms at a mathematical (under-the-hood) level.
- Must have been part of projects building and deploying predictive models in production (financial services domain preferred) involving large and complex data sets.
- Experience in data science for Pricing, Credit Risk, Marketing, Campaign Analytics, Ecommerce Retail, or banking products for retail or business banking is preferred.
- Good to have: knowledge of Time Series, NLP, Deep Learning, and Generative AI.
- Good to have: knowledge and hands-on experience in developing solutions with Large Language Models.
- Good to have: familiarity with agentic coding tools such as Roo Code and Cline.
- Built and deployed large-scale software applications.
- Understanding of principles of software engineering and cloud computing.
- Strong problem-solving and critical-thinking skills.
- Curiosity, fast learning capability, and a team-player attitude are a must.
- Ability to communicate clearly and effectively.
- Demonstrated expertise through blog posts, research, participation in competitions, speaking opportunities, patents, and paper publications.
- Most importantly, the ability to identify and translate theories into real applications to solve practical problems.

Education Qualifications: Bachelor’s degree in Engineering, or Master’s degree or Ph.D. in Data Science, Machine Learning, Computer Science, Computational Linguistics, Statistics, Mathematics, or Engineering.

If you're already part of the Commonwealth Bank Group (including Bankwest and x15ventures), you'll need to apply through Sidekick to submit a valid application. We’re keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696.

Advertising End Date: 02/07/2025
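The listing above asks for experience designing experiments to evaluate model performance. As an illustrative sketch (not part of the listing), precision, recall, and F1 for a binary classifier can be computed from predicted and true labels with nothing but the standard library:

```python
def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Two true positives, one false positive, one false negative:
p, r, f = precision_recall_f1([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
# p, r, f are each 2/3 here
```

In practice a library such as scikit-learn would be used, but the hand-rolled version makes the definitions explicit.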

Posted 1 week ago

Apply

7.0 years

8 - 10 Lacs

Noida

On-site


Clearwater Analytics’ mission is to become the world’s most trusted and comprehensive technology platform for investment reporting, accounting, and analytics. With our team, you will partner with the most sophisticated and innovative institutional investors around the world. If you are infectiously passionate about what you do, intensely committed to clients, and driven by continuous innovation and improvement, we want you to apply!

A career in Software Development will provide you with the opportunity to participate in all phases of the software development lifecycle, including design, implementation, testing, and deployment of quality software. With the use of advanced technology, you and your team will work in an agile environment producing designs and code that our customers will use every day.

Responsibilities:
- Developing quality software that is used by some of the world's largest technology firms, fixed income asset managers, and custodian banks
- Participating in Agile meetings to contribute to development strategies and the product roadmap
- Owning critical processes that are highly available and scalable
- Producing tremendous feature enhancements and reacting quickly to emerging technologies
- Encouraging collaboration and stimulating creativity
- Helping mentor entry-level developers
- Contributing to design and architectural decisions
- Providing leadership and expertise to our ever-growing workforce
- Testing and validating, in development and production, code that you own, deploy, and monitor
- Understanding, responding to, and addressing customer issues with empathy and in a timely manner
- Independently moving a major feature or service through an entire lifecycle of design, development, deployment, and maintenance
- Deep knowledge in multiple teams' domains and a broad understanding of CW systems
- Creating documentation of system requirements and behavior across domains
- Willingly taking on unowned and undesirable work that helps team velocity and quality
- Staying in touch with client needs and understanding their usage
- Being consulted on quality, scaling, and performance requirements before development on new features begins
- Understanding, finding, and proposing solutions for systemic problems
- Leading the technical breakdown of deliverables and capabilities into features and stories
- Expertise in unit testing techniques and design for testability; contributing to automated system testing requirements and design
- Improving code quality and architecture to ensure testability and maintainability
- Understanding, designing, and testing for impact/performance on dependencies and adjacent components and services
- Building and maintaining code in the context and awareness of the larger system
- Helping less experienced engineers troubleshoot and solve problems
- Actively mentoring and training others inside and outside the division

Requirements:
- Bachelor’s degree in Computer Science or a related field
- 7+ years professional experience in industry-leading programming languages (Java/Python)
- Experience with an object-oriented or functional language
- Strong problem-solving skills
- Background in SDLC and Agile practices
- Experience monitoring production systems
- Experience with Machine Learning
- Experience working with cloud platforms (AWS/Azure/GCP)
- Experience working with messaging systems such as Cloud Pub/Sub, Kafka, or SQS/SNS
- Must be able to communicate (speak, read, comprehend, write) in English

Desired Experience or Skills:
- Ability to build scalable backend services (microservices, polyglot storage, messaging systems, data processing pipelines)
- Strong analytical skills, with excellent problem-solving abilities in the face of ambiguity
- Excellent written and verbal skills
- Ability to contribute to software design documentation, presentations, and sequence diagrams, and to present complex technical designs in a concise manner
- Professional experience in building distributed software systems, specializing in big data and NoSQL database technologies (Hadoop, Spark, DynamoDB, HBase, Hive, Cassandra, Vertica)
- Ability to work with relational and NoSQL databases
- Strong organizational, interpersonal, and communication skills
- Detail oriented; motivated team player
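The role above calls for experience with messaging systems such as Kafka or Cloud Pub/Sub. As a minimal sketch (not from the listing), the producer/consumer decoupling those systems provide can be illustrated with Python's standard-library queue; the message names here are made up for illustration:

```python
import queue
import threading

def producer(q, messages):
    """Publish each message to the queue, then signal completion with a sentinel."""
    for msg in messages:
        q.put(msg)
    q.put(None)  # sentinel: no more messages

def consumer(q, results):
    """Consume messages until the sentinel arrives."""
    while True:
        msg = q.get()
        if msg is None:
            break
        results.append(msg.upper())  # stand-in for real processing

q = queue.Queue()
results = []
t1 = threading.Thread(target=producer, args=(q, ["trade", "price", "position"]))
t2 = threading.Thread(target=consumer, args=(q, results))
t1.start()
t2.start()
t1.join()
t2.join()
# results holds the processed messages in arrival order
```

Real brokers add durability, partitioning, and replay on top of this same pattern, but the decoupling of producer and consumer is the core idea.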

Posted 1 week ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site


Introduction:
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities:
- Create Solution Outlines and Macro Designs describing end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles.
- Contribute to pre-sales and sales support through RfP responses, solution architecture, planning, and estimation.
- Contribute to the development of reusable components / assets / accelerators to support capability development.
- Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud, and related technologies.
- Participate in customer PoCs to deliver the outcomes.
- Participate in delivery reviews / product reviews and quality assurance, and act as design authority.

Preferred Education: Non-Degree Program

Required Technical and Professional Expertise:
- Experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems.
- Experience in data engineering and architecting data platforms; experience architecting and implementing data platforms on the Azure Cloud Platform.
- Experience on Azure cloud is mandatory: ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake, Azure Purview, Microsoft Fabric, Kubernetes, Terraform, Airflow.
- Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks.

Preferred Technical and Professional Experience:
- Experience architecting complex data platforms on the Azure Cloud Platform and on-prem.
- Experience and exposure to implementations of Data Fabric and Data Mesh concepts and solutions such as Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric.
- Exposure to data cataloging and governance solutions such as Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, or Snowflake data glossary.

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


We are seeking a Data Solution Architect (Azure; Databricks). In this role, you will leverage your skills in artificial intelligence and machine learning to design robust data analytics solutions. If you are ready to make an impact, apply today!

Responsibilities:
- Design data analytics solutions utilizing the big data technology stack
- Create and present solution architecture documents with technical details
- Collaborate with business stakeholders to identify solution requirements and key scenarios
- Conduct solution architecture reviews and audits while calculating and presenting ROI
- Lead implementation of solutions from establishing project requirements to go-live
- Engage in pre-sale activities, including customer communications and RFP processing
- Develop proposals and design solutions while presenting architecture to customers
- Create and follow a personal education plan in the technology stack and solution architecture
- Maintain knowledge of industry trends and best practices
- Engage new clients to drive business growth in the big data space

Requirements:
- Strong hands-on experience as a Big Data developer with a solid design background
- Experience delivering data analytics projects and architecture guidelines
- Experience with big data solutions on premises and in the cloud
- Production project experience in at least one big data technology
- Knowledge of batch processing frameworks like Hadoop, MapReduce, Spark, or Hive
- Familiarity with NoSQL databases such as Cassandra, HBase, or Kudu
- Understanding of Agile development methodology, with an emphasis on Scrum
- Experience in direct customer communications and pre-sales consulting
- Experience working within a consulting environment would be highly valuable
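The requirements above mention batch processing frameworks like Hadoop and MapReduce. As an illustrative sketch (not part of the listing), the map/reduce split those frameworks distribute across a cluster can be shown in a few lines of plain Python, here counting words across input lines:

```python
from collections import Counter
from itertools import chain

def map_phase(line):
    """Map step: emit (word, 1) pairs for each word in a line."""
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(pairs):
    """Reduce step: sum the counts for each distinct word."""
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data big plans", "data pipelines"]
pairs = chain.from_iterable(map_phase(line) for line in lines)
counts = reduce_phase(pairs)
# counts maps each word to its total occurrences across all lines
```

In a real framework the map outputs are shuffled by key to many reducers in parallel; the single-process version above keeps only the logical structure.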

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY Consulting - Cloud Testing: Staff

The opportunity:
As a Cloud Test Engineer, you will be responsible for testing cloud solutions on a cloud platform and ensuring the quality of deliverables. You will work closely with the Test Lead for the projects under test. Testing proficiency in the cloud and knowledge of at least one cloud platform (AWS/Azure/GCP) are required for this position. Experience with CI/CD platforms, cloud foundations, and cloud data platforms is an added advantage.

Skills and attributes for success:
- Delivery of testing needs for cloud projects
- Ability to communicate effectively with team members across geographies
- Experience in cloud infrastructure testing
- Sound cloud concepts and the ability to suggest options
- Knowledge of any of the cloud platforms (AWS/Azure/GCP)
- Knowledge of Azure DevOps / Jenkins / pipelines
- Thorough understanding of requirements and the ability to provide feedback on them
- Develop a test strategy for cloud projects covering platform testing, application testing, integration testing, and UAT as needed
- Provide inputs for test planning aligned with the test strategy
- Perform test case design and identify opportunities for test automation
- Develop test cases, both manual and automation scripts, as required
- Ensure test readiness (test environment, test data, tool licenses, etc.)
- Perform test execution and report progress
- Report defects and liaise with development and other relevant teams for defect resolution
- Prepare test reports and provide inputs to the Test Lead for test sign-off/closure
- Provide support in project meetings/calls with the client for status reporting
- Provide inputs on test metrics to the Test Lead; support analysis of metric trends and implement improvement actions as necessary
- Handle changes and conduct regression testing
- Generate test summary reports
- Coordinate test team members and the development team
- Interact with client-side stakeholders to solve issues and update status
- Actively take part in providing Analytics and Advanced Analytics testing trainings in the company

To qualify for the role, you must have:
- BE/BTech/MCA/M.Sc
- Overall 2 to 6 years of experience in testing cloud solutions, with a minimum of 2 years of experience in cloud solutions built on Azure/AWS/GCP
- Certifications in the cloud area (desirable)
- Exposure to Spark SQL / HiveQL testing (desirable)
- Exposure to data migration projects from on-premise to cloud platforms (desirable)
- Understanding of business intelligence concepts, architecture, and building blocks in areas such as ETL processing, data warehouses, dashboards, and analytics
- Working experience in scripting languages such as Python, JavaScript, or Java
- Testing experience in more than one of these areas: cloud foundation, DevOps, data quality, ETL, OLAP, reports
- Exposure to SQL Server or Oracle databases and proficiency with SQL scripting
- Exposure to backend testing of enterprise applications/systems built on different platforms, including Microsoft .NET and SharePoint technologies
- Exposure to ETL testing using commercial ETL tools (desirable)
- Knowledge of or experience in SSRS (SQL Server Reporting Services), Spotfire, and SSIS (desirable)
- Exposure to data transformation projects, database design concepts, and white-box testing (desirable)

Ideally, you’ll also have:
- Experience/exposure to test automation; scripting experience in Perl and shell is desirable
- Experience with test management and defect management tools, preferably HP ALM or JIRA
- Ability to contribute as an individual contributor and, when required, lead a small team
- Ability to create a test strategy and test plan for testing cloud applications/solutions that are moderately complex to high-risk systems
- Ability to design test cases and test data, and perform test execution and reporting
- Ability to perform test management for small projects as and when required
- Participation in defect triaging and tracking defects to resolution/conclusion
- Good communication skills (both written and verbal), with the ability to articulate concisely and clearly
- Good understanding of SDLC, and the test process in particular
- Good analytical, problem-solving, and troubleshooting skills
- Good understanding of the project life cycle and test life cycle
- Exposure to CMMi and process improvement frameworks (a plus)
- Readiness to take on an individual contributor as well as a team leader role

What working at EY offers:
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange.

Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
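The EY role above centers on ETL and data quality testing. As a hedged sketch (not from the listing; the check names and sample rows are invented for illustration), a basic reconciliation between a source extract and a loaded target might assert on row counts, duplicate keys, and nulls:

```python
def validate_load(source_rows, target_rows, key="id"):
    """Basic ETL reconciliation checks: row counts, duplicate keys, null values.

    Returns a list of human-readable issue descriptions (empty = clean load).
    """
    issues = []
    if len(source_rows) != len(target_rows):
        issues.append(f"row count mismatch: {len(source_rows)} vs {len(target_rows)}")
    keys = [row[key] for row in target_rows]
    if len(keys) != len(set(keys)):
        issues.append("duplicate keys in target")
    if any(v is None for row in target_rows for v in row.values()):
        issues.append("null values in target")
    return issues

source = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}]
target = [{"id": 1, "amt": 10}, {"id": 2, "amt": None}]
issues = validate_load(source, target)
# issues flags the null introduced during the (hypothetical) load
```

Real test suites would run checks like these against warehouse tables via SQL, but the pass/fail structure is the same.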

Posted 1 week ago

Apply

8.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office


What you’ll be doing:
- Assist in developing machine learning models based on project requirements
- Work with datasets by preprocessing, selecting appropriate data representations, and ensuring data quality
- Perform statistical analysis and fine-tuning using test results
- Support training and retraining of ML systems as needed
- Help build data pipelines for collecting and processing data efficiently
- Follow coding and quality standards while developing AI/ML solutions
- Contribute to frameworks that help operationalize AI models

What we seek in you:
- 8+ years of experience in the IT industry
- Strong programming skills in languages like Python
- Hands-on experience with one cloud platform (GCP preferred)
- Experience working with Docker
- Environment management (e.g., venv, pip, poetry)
- Experience with orchestrators like Vertex AI Pipelines, Airflow, etc.
- Understanding of the full ML cycle end-to-end
- Data engineering and feature engineering techniques
- Experience with ML modelling and evaluation metrics
- Experience with TensorFlow, PyTorch, or another framework
- Experience with model monitoring
- Advanced SQL knowledge
- Awareness of streaming concepts like windowing, late arrival, triggers, etc.
- Storage: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore, vector databases
- Ingest: Pub/Sub, Cloud Functions, App Engine, Kubernetes Engine, Kafka, microservices
- Scheduling: Cloud Composer, Airflow
- Processing: Cloud Dataproc, Cloud Dataflow, Apache Spark, Apache Flink
- CI/CD: Bitbucket + Jenkins / GitLab; Infrastructure as Code: Terraform

Life at Next:
At our core, we're driven by the mission of tailoring growth for our customers by enabling them to transform their aspirations into tangible outcomes. We're dedicated to empowering them to shape their futures and achieve ambitious goals. To fulfil this commitment, we foster a culture defined by agility, innovation, and an unwavering commitment to progress. Our organizational framework is both streamlined and vibrant, characterized by a hands-on leadership style that prioritizes results and fosters growth.

Perks of working with us:
- Clear objectives to ensure alignment with our mission, fostering your meaningful contribution
- Abundant opportunities for engagement with customers, product managers, and leadership
- Guidance along progressive paths, with insightful feedback from managers through ongoing feedforward sessions
- Robust connections to cultivate and leverage within diverse communities of interest
- A mentor of your choosing to navigate your current endeavors and steer your future trajectory
- Continuous learning and upskilling opportunities through Nexversity
- Flexibility to explore various functions, develop new skills, and adapt to emerging technologies
- A hybrid work model promoting work-life balance
- Comprehensive family health insurance coverage, prioritizing the well-being of your loved ones
- Accelerated career paths to actualize your professional aspirations

Who we are:
We enable high-growth enterprises to build hyper-personalized solutions that transform their vision into reality. With a keen eye for detail, we apply creativity, embrace new technology, and harness the power of data and AI to co-create solutions tailor-made to meet our customers' unique needs. Join our passionate team and tailor your growth with us!
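The listing above asks for awareness of streaming concepts such as windowing and late arrival. As an illustrative sketch only (the event data and the simple max-event-time watermark are assumptions, not from the listing), tumbling windows with an allowed-lateness cutoff can be modeled like this:

```python
from collections import defaultdict

def tumbling_windows(events, size, allowed_lateness):
    """Assign (event_time, value) events to tumbling windows of `size` seconds.

    The watermark advances with the maximum event time seen so far; an event
    older than watermark - allowed_lateness is dropped as too late.
    """
    windows = defaultdict(list)
    watermark = float("-inf")
    dropped = []
    for event_time, value in events:
        watermark = max(watermark, event_time)
        if event_time < watermark - allowed_lateness:
            dropped.append((event_time, value))  # arrived too late: discard
            continue
        start = (event_time // size) * size  # window start boundary
        windows[start].append(value)
    return dict(windows), dropped

# Out-of-order events: (3, "c") arrives late but within lateness; (2, "e") is too late.
events = [(1, "a"), (12, "b"), (3, "c"), (25, "d"), (2, "e")]
wins, late = tumbling_windows(events, size=10, allowed_lateness=15)
```

Engines like Dataflow or Flink implement watermarks and triggers far more elaborately, but the window assignment and lateness cutoff shown here are the underlying ideas.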

Posted 1 week ago

Apply

8.0 - 13.0 years

15 - 25 Lacs

Bengaluru

Hybrid


We are looking for an enthusiastic and technology-proficient Big Data Engineer, eager to participate in the design and implementation of a top-notch Big Data solution to be deployed at massive scale. Our customer is one of the world's largest technology companies, based in Silicon Valley with operations all over the world. On this project we are working on the bleeding edge of Big Data technology to develop a high-performance data analytics platform that handles petabyte-scale datasets.

Essential functions:
- Participate in the design and development of Big Data analytical applications
- Design, support, and continuously enhance the project code base, continuous integration pipeline, etc.
- Write complex ETL processes and frameworks for analytics and data management
- Implement large-scale near-real-time streaming data processing pipelines
- Work in a team of industry experts on cutting-edge Big Data technologies to develop solutions for deployment at massive scale

Qualifications:
- Strong coding experience with Scala, Spark, Hive, and Hadoop
- In-depth knowledge of Hadoop and Spark; experience with data mining and stream processing technologies (Kafka, Spark Streaming, Akka Streams)
- Understanding of best practices in data quality and quality engineering
- Experience with version control systems, Git in particular
- Desire and ability to learn new tools and technologies quickly

Would be a plus:
- Knowledge of Unix-based operating systems (bash/ssh/ps/grep, etc.)
- Experience with GitHub-based development processes
- Experience with JVM build systems (SBT, Maven, Gradle)

We offer:
- Opportunity to work on bleeding-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package: medical insurance, sports
- Corporate social events
- Professional development opportunities
- Well-equipped office

About us:
Grid Dynamics (NASDAQ: GDYN) is a leading provider of technology consulting, platform and product engineering, AI, and advanced analytics services. Fusing technical vision with business acumen, we solve the most pressing technical challenges and enable positive business outcomes for enterprise companies undergoing business transformation. A key differentiator for Grid Dynamics is our 8 years of experience and leadership in enterprise AI, supported by profound expertise and ongoing investment in data, analytics, cloud & DevOps, application modernization, and customer experience. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Experience: 5+ Years

Role Overview: Responsible for designing, building, and maintaining scalable data pipelines and architectures. This role requires expertise in SQL, ETL frameworks, big data technologies, cloud services, and programming languages to ensure efficient data processing, storage, and integration across systems.

Requirements:
• Minimum 5+ years of experience as a Data Engineer or in a similar data-related role.
• Strong proficiency in SQL for querying databases and performing data transformations.
• Experience with data pipeline frameworks (e.g., Apache Airflow, Luigi, or custom-built solutions).
• Proficiency in at least one programming language such as Python, Java, or Scala for data processing tasks.
• Experience with cloud-based data services and data lakes (e.g., Snowflake, Databricks, AWS S3, GCP BigQuery, or Azure Data Lake).
• Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka).
• Experience with ETL tools (e.g., Talend, Apache NiFi, SSIS) and data integration techniques.
• Knowledge of data warehousing concepts and database design principles.
• Good understanding of NoSQL and big data technologies like MongoDB, Cassandra, Spark, Hadoop, and Hive.
• Experience with data modeling and schema design for OLAP and OLTP systems.
• Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).

Educational Qualification: Bachelor’s/Master’s degree in Computer Science, Information Technology, or a related field.
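The role above leans heavily on SQL for data transformations. As a small self-contained sketch (not from the listing; the table and rows are invented for illustration), the kind of aggregation step common in ETL pipelines can be tried locally with Python's built-in sqlite3 module:

```python
import sqlite3

# In-memory database standing in for a warehouse staging area.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "south", 120.0), (2, "north", 80.0), (3, "south", 50.0)],
)

# A typical transformation: aggregate raw rows into a per-region summary.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total "
    "FROM orders GROUP BY region ORDER BY region"
).fetchall()
# rows pairs each region with its summed amount
conn.close()
```

The same GROUP BY pattern carries over directly to warehouse engines like BigQuery or Snowflake; only the connection layer changes.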

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 15 Lacs

New Delhi, Hyderabad, Gurugram

Work from Office


Primary Skills: Hadoop, Hive, Python, SQL, PySpark/Spark. Location: Hyderabad / Gurgaon.

Posted 1 week ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts.

Job Category: Software Engineering

About Salesforce:
We’re Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And we empower you to be a Trailblazer, too: driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change, and in companies doing well and doing good, you’ve come to the right place.

As an engineering leader, you will focus on developing the team around you. Bring your technical chops to drive your teams to success around feature delivery and live-site management for a complex cloud infrastructure service. You are as enthusiastic about recruiting and building a team as you are about the challenging technical problems your team will solve. You will also help shape, direct, and execute our product vision. You’ll be challenged to blend customer-centric principles, industry-changing innovation, and the reliable delivery of new technologies. You will work directly with engineering, product, and design to create experiences that reinforce the Salesforce brand by delighting and wowing our customers with highly reliable and available services.

Responsibilities:
- Drive the vision of enabling a full suite of Salesforce applications on Google Cloud in collaboration with teams across geographies
- Build and lead a team of engineers to deliver cloud frameworks, infrastructure automation tools, workflows, and validation platforms on our public cloud platforms
- Apply solid experience in building and evolving large-scale distributed systems to reliably process billions of data points
- Proactively identify reliability and data quality problems and drive the triaging and remediation process
- Invest in the continuous development of a highly technical team by mentoring and coaching engineers and technical leads
- Recruit and attract top talent
- Drive execution and delivery by collaborating with cross-functional teams, architects, product owners, and engineers

Required Skills/Experiences:
- B.S./M.S. in Computer Science or an equivalent field
- 12+ years of relevant experience in software development teams, with 5+ years of experience managing teams (including 2+ engineering teams)
- Experience building services on public cloud platforms like GCP, AWS, or Azure
- Passionate, curious, creative self-starter who approaches problems with the right methodology and intelligent decisions
- Laser focus on impact, balancing effort to value, and getting things done
- Experience providing mentorship, technical leadership, and guidance to team members
- Strong customer service orientation and a desire to help others succeed
- Top-notch written and oral communication skills

Desired Skills/Experiences:
- Working knowledge of modern technologies/services on public cloud
- Experience with container orchestration systems such as Kubernetes, Docker, Helios, or Fleet
- Expertise in open source technologies like Elasticsearch, Logstash, Kafka, MongoDB, Hadoop, Spark, Trino/Presto, Hive, Airflow, and Splunk

Benefits & Perks:
- Comprehensive benefits package including well-being reimbursement, generous parental leave, adoption assistance, fertility benefits, and more
- World-class enablement and on-demand training with Trailhead.com
- Exposure to executive thought leaders and regular 1:1 coaching with leadership
- Volunteer opportunities and participation in our 1:1:1 model for giving back to the community
For more details, visit https://www.salesforcebenefits.com/

Accommodations:
If you require assistance due to a disability applying for open positions, please submit a request via this Accommodations Request Form.

Posting Statement:
Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all, and we believe we can lead the path to equality in part by creating a workplace that’s inclusive and free from discrimination. Know your rights: workplace discrimination is illegal. Any employee or potential employee will be assessed on the basis of merit, competence and qualifications, without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

About the role Analyse complex datasets and make them consumable using visual storytelling and visualization tools such as reports and dashboards built using approved tools (Tableau, PyDash) You will be responsible for Understands business needs and has an in-depth understanding of Tesco processes - Builds on Tesco processes and knowledge by applying CI tools and techniques - Responsible for completing tasks and transactions within agreed KPIs - Solves problems by analyzing solution alternatives - Engage with market leaders to understand problems to be solved, translate the business problems to analytical problems, taking ownership of specified analysis and translating the answers back to decision makers in business - Manipulating, analyzing and synthesizing large complex data sets using different sources and ensuring data quality and integrity - Think beyond the ask and develop analysis and reports that will contribute beyond basic asks - Accountable for high quality and timely completion of specified work deliverables and ad-hoc business asks - Write code that is well documented, structured, and compute-efficient - Drive value delivery through efficiency gains by automating repeatable tasks, report creation or dashboard refreshes - Collaborate with colleagues to craft, implement and measure consumption of analysis, reports and dashboards - Contribute to development of knowledge assets and reusable modules on GitHub/Wiki - Responsible for completing tasks and transactions within agreed metrics - Experience in handling high-volume, time-pressured business asks and ad-hoc requests You will need 2-4 years of experience in analysis-oriented delivery in one of the domains like retail, CPG, telecom or hospitality, and in one of the following functional areas - marketing, supply chain, customer, space range and merchandising, operations, finance or digital - is preferred. Strong understanding of Business 
Decisions, Skills to develop visualizations, self-service dashboards and reports using Tableau & Basic Statistical Concepts (Correlation Analysis and Hypothesis Testing), Good Skills to analyze data using Advanced Excel, Advanced SQL, Hive, Python, Data Warehousing concepts (Hadoop, Teradata), Automation using Alteryx, Python What's in it for you? At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on the current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable. Salary - Your fixed pay is the guaranteed pay as per your contract of employment. Performance Bonus - Opportunity to earn additional compensation bonus based on performance, paid annually Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company’s policy. Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF. Health is Wealth - Tesco promotes programmes that support a culture of health and wellness including insurance for colleagues and their family. Our medical insurance provides coverage for dependents including parents or in-laws. Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents. Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request. 
Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan. Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle. About Us Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues. Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single entity traditional shared services in Bengaluru, India (from 2004) to a global, purpose-driven solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business. TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. 
TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation Show more Show less
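The correlation analysis and hypothesis testing named in the role's skills list can be sketched using only Python's standard library. This is an illustrative toy, not Tesco code; the footfall and sales figures are invented:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def welch_t(xs, ys):
    """Welch's t-statistic for a two-sample difference-in-means test."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)  # sample variances
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# Hypothetical daily footfall vs. sales (correlation), and
# weekly sales before/after a promotion (hypothesis test)
footfall = [10, 12, 9, 11, 10]
sales = [100, 120, 90, 110, 100]
before = [100, 102, 98, 101, 99]
after = [110, 112, 108, 111, 109]
```

In practice the t-statistic would be compared against a t-distribution (e.g. via `scipy.stats`) to get a p-value; the from-scratch version above just shows the arithmetic.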

Posted 1 week ago

Apply

8.0 - 15.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Position Summary... What you'll do... Role: Staff, Data Scientist Experience: 8-15 years Location: Chennai About EBS team: Enterprise Business Services is invested in building a compact, robust organization that includes service operations and technology solutions for Finance, People, and Associate Digital Experience. Our team is responsible for the design and development of solutions that know our consumers' needs better than ever by predicting what they want based on unconstrained demand, and efficiently unlock strategic growth, economic profit, and wallet share by orchestrating intelligent, connected planning and decisioning across all functions. We interact with multiple teams across the company to provide scalable, robust technical solutions. This role will play a crucial role in overseeing the planning, execution and delivery of complex projects within the team. Walmart's Enterprise Business Services (EBS) is a powerhouse of several exceptional teams delivering world-class technology solutions and services making a profound impact at every level of Walmart. As a key part of Walmart Global Tech, our teams set the bar for operational excellence and leverage emerging technology to support millions of customers, associates, and stakeholders worldwide. Each time an associate turns on their laptop, a customer makes a purchase, a new supplier is onboarded, the company closes the books, physical and legal risk is avoided, and when we pay our associates consistently and accurately, that is EBS. Joining EBS means embarking on a journey of limitless growth, relentless innovation, and the chance to set new industry standards that shape the future of Walmart. About Team The data science team at the Enterprise Business Services Pillar at Walmart Global Tech focuses on using the latest research in machine learning, statistics, and optimization to solve business problems. 
We mine data, distill insights, extract information, build analytical models, deploy Machine Learning algorithms, and use the latest algorithms and technology to empower business decision-making. In addition, we work with engineers to build reference architectures and machine learning pipelines in a big data ecosystem to productize our solutions. Advanced analytical algorithms driven by our team will help Walmart optimize business operations and business practices and change the way our customers shop. The data science community at Walmart Global Tech is active in most of the Hack events, utilizing the petabytes of data at our disposal, to build some of the coolest ideas. All the work we do at Walmart Labs will eventually benefit our operations and our associates, helping Customers Save Money to Live Better. What You Will Do As a Staff Data Scientist for Walmart Global Tech, you'll have the opportunity to Drive data-derived insights across a wide range of retail & Finance divisions by developing advanced statistical models, machine learning algorithms and computational algorithms based on business initiatives Direct the gathering of data, assess data validity and synthesize data into large analytics datasets to support project goals Utilize big data analytics and advanced data science techniques to identify trends, patterns, and discrepancies in data. Determine additional data needed to support insights Build and train AI/ML models for replication for future projects Deploy and maintain the data science solutions Communicate recommendations to business partners and influence future plans based on insights Consult with business stakeholders regarding algorithm-based recommendations and be a thought-leader to develop these into business actions. Closely partners with the Senior Manager & Director of Data Science to drive data science adoption in the domain. Guides
data scientists, senior data scientists & staff data scientists across multiple sub-domains to ensure on-time delivery of ML products Drive efficiency across the domain in terms of DS and ML best practices, MLOps practices, resource utilization, reusability and multi-tenancy. Lead multiple complex ML products and guide senior tech leads in the domain in efficiently leading their products. Drive synergies across different products in terms of algorithmic innovation and sharing of best practices. Proactively identify complex business problems that can be solved using advanced ML, finding opportunities and gaps in the current business domain Evaluate proposed business cases for projects and initiatives What You Will Bring Master's with 10+ years OR Ph.D. with 8+ years of relevant experience. Educational qualification should be in Computer Science/Statistics/Mathematics or a related area. Minimum 6 years of experience as a data science technical lead Ability to lead multiple data science projects end to end. 
Deep experience in building data science solutions in areas like fraud prevention, forecasting, shrink and waste reduction, inventory management, recommendation, assortment and price optimization Deep experience in simultaneously leading multiple data science initiatives end to end – from translating business needs to analytical asks, leading the process of building solutions and the eventual act of deployment and maintenance of them Strong experience in machine learning: Classification models, regression models, NLP, Forecasting, Unsupervised models, Optimization, Graph ML, Causal inference, Causal ML, Statistical Learning, experimentation & Gen-AI In Gen-AI, it is desirable to have experience in embedding generation from training materials, storage and retrieval from Vector Databases, set-up and provisioning of managed LLM gateways, development of Retrieval-Augmented Generation based LLM agents, model selection, iterative prompt engineering and fine-tuning based on accuracy and user feedback, and monitoring and governance. Ability to scale and deploy data science solutions. Strong experience with one or more of Python and R. Experience in GCP/Azure Strong experience in Python, PySpark, Google Cloud Platform, Vertex AI, Kubeflow, model deployment Strong experience with big data platforms – Hadoop (Hive, MapReduce, HQL, Scala) Experience with GPU/CUDA for computational efficiency About Walmart Global Tech Imagine working in an environment where one line of code can make life easier for hundreds of millions of people. That’s what we do at Walmart Global Tech. We’re a team of software engineers, data scientists, cybersecurity experts and service professionals within the world’s leading retailer who make an epic impact and are at the forefront of the next retail disruption. People are why we innovate, and people power our innovations. We are people-led and tech-empowered. We train our team in the skillsets of the future and bring in experts like you to help us grow. 
We have roles for those chasing their first opportunity as well as those looking for the opportunity that will define their career. Here, you can kickstart a great career in tech, gain new skills and experience for virtually every industry, or leverage your expertise to innovate at scale, impact millions and reimagine the future of retail. Flexible, hybrid work We use a hybrid way of working with primary in-office presence coupled with an optimal mix of virtual presence. We use our campuses to collaborate and be together in person, as business needs require and for development and networking opportunities. This approach helps us make quicker decisions, remove location barriers across our global team, and be more flexible in our personal lives. Benefits Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits: maternity and parental leave, PTO, health benefits, and much more. Belonging We aim to create a culture where every associate feels valued for who they are, rooted in respect for the individual. Our goal is to foster a sense of belonging, to create opportunities for all our associates, customers and suppliers, and to be a Walmart for everyone. At Walmart, our vision is "everyone included." By fostering a workplace culture where everyone is—and feels—included, everyone wins. Our associates and customers reflect the makeup of all 19 countries where we operate. By making Walmart a welcoming place where all people feel like they belong, we’re able to engage associates, strengthen our business, improve our ability to serve customers, and support the communities where we operate. Equal Opportunity Employer Walmart, Inc., is an Equal Opportunities Employer – By Choice. We believe we are best equipped to help our associates, customers and the communities we serve live better when we really know them. 
That means understanding, respecting and valuing unique styles, experiences, identities, ideas and opinions – while being inclusive of all people. Minimum Qualifications... Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications. Minimum Qualifications: Option 1: Bachelor's degree in Statistics, Economics, Analytics, Mathematics, Computer Science, Information Technology or related field and 4 years' experience in an analytics related field. Option 2: Master's degree in Statistics, Economics, Analytics, Mathematics, Computer Science, Information Technology or related field and 2 years' experience in an analytics related field. Option 3: 6 years' experience in an analytics or related field. Preferred Qualifications... Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications. Primary Location... Rmz Millenia Business Park, No 143, Campus 1B (1st-6th Floor), Dr. MGR Road, (North Veeranam Salai) Perungudi, India R-2182242 Show more Show less
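The Gen-AI retrieval step described in the posting above (embedding storage and retrieval ahead of an LLM call) reduces, at its core, to nearest-neighbour search over vectors. A toy sketch, with hand-made 3-dimensional "embeddings" standing in for model output and a plain list standing in for a vector database:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=1):
    """Return the k documents whose embeddings are most similar to the query."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["embedding"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

# Hypothetical corpus; a real RAG system would embed with a model
# and store/search the vectors in a vector database.
corpus = [
    {"text": "returns policy", "embedding": [0.9, 0.1, 0.0]},
    {"text": "store hours", "embedding": [0.0, 0.2, 0.9]},
    {"text": "refund steps", "embedding": [0.8, 0.3, 0.1]},
]
query = [1.0, 0.0, 0.0]  # embedding of "how do I return an item?"
```

The retrieved texts would then be stuffed into the LLM prompt as context; that handoff, plus monitoring and governance, is what the posting's "RAG-based LLM agents" build on top of this search step.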

Posted 1 week ago

Apply

7.0 years

0 Lacs

Greater Kolkata Area

On-site

Linkedin logo

Overview Working at Atlassian Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, as part of being a distributed-first company. Responsibilities Atlassian is looking for a Senior Data Engineer to join our Data Engineering team, which is responsible for building our data lake, maintaining our big data pipelines / services and facilitating the movement of billions of messages each day. We work directly with the business stakeholders and plenty of platform and engineering teams to enable growth and retention strategies at Atlassian. We are looking for an open-minded, structured thinker who is passionate about building services that scale. On a typical day you will help our stakeholder teams ingest data faster into our data lake, you’ll find ways to make our data pipelines more efficient, or even come up with ideas to help instigate self-serve data engineering within the company. You’ll get the opportunity to work on an AWS-based data lake backed by the full suite of open source projects such as Spark and Airflow. We are a team with little legacy in our tech stack and as a result you’ll spend less time paying off technical debt and more time identifying ways to make our platform better and improve our users’ experience. Qualifications As a Senior Data Engineer in the DE team, you will have the opportunity to apply your strong technical experience building highly reliable services on managing and orchestrating a multi-petabyte scale data lake. You enjoy working in a fast-paced environment and you are able to take vague requirements and transform them into solid solutions. You are motivated by solving challenging problems, where creativity is as crucial as your ability to write code and test cases. 
On Your First Day, We'll Expect You To Have A BS in Computer Science or equivalent experience 7+ years of professional experience as a Sr. Software Engineer or Sr. Data Engineer Strong programming skills (Python, Java or Scala preferred) Experience writing SQL, structuring data, and data storage practices Experience with data modeling Knowledge of data warehousing concepts Experience building data pipelines and platforms Experience with Databricks, Spark, Hive, Airflow and other streaming technologies to process incredible volumes of streaming data Experience in modern software development practices (Agile, TDD, CI/CD) Strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues and anomalies A willingness to accept failure, learn and try again An open mind to try solutions that may seem crazy at first Experience working on Amazon Web Services (in particular using EMR, Kinesis, RDS, S3, SQS and the like) It's Preferred That You Have Experience building self-service tooling and platforms Built and designed Kappa architecture platforms Contributed to open source projects (Ex: Operators in Airflow) Experience with Data Build Tool (DBT) Our Perks & Benefits Atlassian offers a wide range of perks and benefits designed to support you, your family and to help you engage with your local community. Our offerings include health and wellbeing resources, paid volunteer days, and so much more. To learn more, visit go.atlassian.com/perksandbenefits . About Atlassian At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together. We believe that the unique contributions of all Atlassians create our success. 
To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines. To provide you the best experience, we can support with accommodations or adjustments at any stage of the recruitment process. Simply inform our Recruitment team during your conversation with them. To learn more about our culture and hiring process, visit go.atlassian.com/crh . Show more Show less
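The automatic detection of data issues and anomalies that the posting above asks for can be as simple as a z-score rule over pipeline metrics. A minimal, framework-free sketch; the row counts are hypothetical, and a production system would track many metrics per table and feed alerts into tooling:

```python
import math

def zscore_anomalies(values, threshold=3.0):
    """Return indices of values more than `threshold` standard deviations from the mean."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)  # population std
    if std == 0:
        return []  # all values identical: nothing to flag
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]

# Hypothetical daily row counts landed by an ingestion pipeline;
# the last run silently dropped most of its rows.
row_counts = [1000, 1020, 990, 1010, 1005, 100]
```

With few data points a single outlier inflates the standard deviation, so a lower threshold (e.g. 2.0) or a rolling median-based rule is often used in practice.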

Posted 1 week ago

Apply

0.0 - 10.0 years

0 Lacs

Delhi, Delhi

On-site

Indeed logo

Job Profile Role: AI Developer Location: Delhi Experience: 6-10 Years Qualifications: 1. Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field. 2. Proven experience of 6-10 years as an AI Developer or similar role. 3. Proficient in coding and ability to develop and implement AI models and algorithms from scratch. 4. Strong knowledge of AI frameworks and libraries. 5. Proficiency in data manipulation and analysis methods. 6. Excellent problem-solving abilities and attention to detail. 7. Good communication and teamwork skills. Responsibilities: 1. Implement AI solutions that seamlessly integrate with existing business systems to enhance functionality and user interaction. 2. Manage the data flow and infrastructure for the effective functioning of the AI Department. 3. Design, develop, and implement AI models and algorithms from scratch. 4. Collaborate with the IT team to ensure the successful deployment of AI models. 5. Continuously research and implement new AI technologies to improve existing systems. 6. Maintain up-to-date knowledge of AI and machine learning trends and advancements. 7. Provide technical guidance and support to the team as needed. Coding Knowledge Required: 1. Proficiency in programming languages like Python, Java, R, etc. 2. Experience with machine learning frameworks like TensorFlow or PyTorch. 3. Knowledge of cloud platforms like AWS, Google Cloud, or Azure. 4. Familiarity with databases, both SQL and NoSQL. 5. Understanding of data structures, data modeling, and software architecture. 6. Experience with distributed data/computing tools like Hadoop, Hive, Spark, etc. Job Type: Full-time Pay: ₹14,214.66 - ₹66,535.00 per month Schedule: Day shift Work Location: In person
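"Developing AI models and algorithms from scratch", as the posting above puts it, in its simplest form means fitting a model by gradient descent with no framework at all. An illustrative sketch with invented data, fitting y = w*x + b by minimising mean squared error:

```python
def fit_linear(xs, ys, lr=0.01, epochs=5000):
    """Fit y = w*x + b by batch gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of MSE with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Invented data following y = 3x + 1 exactly
xs = [0, 1, 2, 3, 4]
ys = [1, 4, 7, 10, 13]
```

Frameworks like TensorFlow or PyTorch compute the same gradients automatically and scale the idea to deep networks; the update rule is the same in spirit.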

Posted 1 week ago

Apply

7.0 - 10.0 years

8 - 14 Lacs

Hyderabad

Hybrid

Naukri logo

Responsibilities of the Candidate : - Be responsible for the design and development of big data solutions. Partner with domain experts, product managers, analysts, and data scientists to develop Big Data pipelines in Hadoop - Be responsible for moving all legacy workloads to a cloud platform - Work with data scientists to build Client pipelines using heterogeneous sources and provide engineering services for PySpark data science applications - Ensure automation through CI/CD across platforms both in the cloud and on-premises - Define needs around maintainability, testability, performance, security, quality, and usability for the data platform - Drive implementation, consistent patterns, reusable components, and coding standards for data engineering processes - Convert SAS-based pipelines into languages like PySpark and Scala to execute on Hadoop and non-Hadoop ecosystems - Tune big data applications on Hadoop and non-Hadoop platforms for optimal performance - Apply an in-depth understanding of how data analytics collectively integrate within the sub-function as well as coordinate and contribute to the objectives of the entire function. - Produce a detailed analysis of issues where the best course of action is not evident from the information available, but actions must be recommended/taken. 
- Assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets, by driving compliance with applicable laws, rules, and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing and reporting control issues with transparency Requirements : - 6+ years of total IT experience - 3+ years of experience with Hadoop (Cloudera)/big data technologies - Knowledge of the Hadoop ecosystem and Big Data technologies - Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, Solr) - Experience in designing and developing Data Pipelines for Data Ingestion or Transformation using Java, Scala, or Python - Experience with Spark programming (PySpark, Scala, or Java) - Hands-on experience with Python/PySpark/Scala and basic libraries for machine learning is required - Proficiency in programming in Java or Python, with prior Apache Beam/Spark experience a plus - Hands-on experience in CI/CD, scheduling and scripting - Ensure automation through CI/CD across platforms both in the cloud and on-premises - System-level understanding - data structures, algorithms, distributed storage & compute - Can-do attitude on solving complex business problems, good interpersonal and teamwork skills

Posted 1 week ago

Apply

5.0 - 10.0 years

32 Lacs

Bengaluru

Work from Office

Naukri logo

Responsibilities: Ability to design and build a Python-based code generation framework and runtime engine by reading a Business Rules repository. Requirements: Minimum 5 years of experience in build & deployment of Big Data applications using SparkSQL, Spark Streaming in Python; Expertise in graph algorithms and advanced recursion techniques; Minimum 5 years of extensive experience in design, build and deployment of Python-based applications; Minimum 3 years of experience in the following: HIVE, YARN, Kafka, HBase, MongoDB; Hands-on experience in generating/parsing XML, JSON documents, and REST API request/responses; Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, Econometrics) and a minimum of 5 years of experience; Expertise in handling complex large-scale Big Data environments, preferably 20 TB+; Hands-on experience writing complex SQL queries, exporting and importing large amounts of data using utilities.
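The graph algorithms and recursion techniques the posting above asks for can be illustrated by a recursive depth-first traversal, for example to resolve which rules a given rule transitively depends on. The rule graph below is hypothetical:

```python
def reachable(graph, start, seen=None):
    """Recursively collect every node reachable from `start` in a directed graph."""
    if seen is None:
        seen = set()  # fresh visited-set per top-level call
    seen.add(start)
    for nxt in graph.get(start, []):
        if nxt not in seen:  # guard against cycles and repeated work
            reachable(graph, nxt, seen)
    return seen

# Hypothetical rule-dependency graph: rule "a" depends on "b" and "c", etc.
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": [], "e": ["a"]}
```

For very deep graphs an explicit stack avoids Python's recursion limit, but the recursive form is the clearer starting point.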

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Linkedin logo

About the Team Data is at the foundation of DoorDash's success. The Data Engineering team builds database solutions for various use cases including reporting, product analytics, marketing optimization and financial reporting. By implementing pipelines, data structures, and data warehouse architectures, this team serves as the foundation for decision-making at DoorDash. About the Role DoorDash is looking for a Senior Data Engineer to be a technical powerhouse to help us scale our data infrastructure, automation and tools to meet growing business needs. You're excited about this opportunity because you will… Work with business partners and stakeholders to understand data requirements Work with engineering, product teams and 3rd parties to collect required data Design, develop and implement large scale, high volume, high performance data models and pipelines for Data Lake and Data Warehouse Develop and implement data quality checks, conduct QA and implement monitoring routines Improve the reliability and scalability of our ETL processes Manage a portfolio of data products that deliver high-quality, trustworthy data Help onboard and support other engineers as they join the team We're excited about you because… 5+ years of professional experience 3+ years of experience working in data engineering, business intelligence, or a similar role Proficiency in programming languages such as Python/Java 3+ years of experience in ETL orchestration and workflow management tools like Airflow, Flink, Oozie and Azkaban using AWS/GCP Expertise in database fundamentals, SQL and distributed computing 3+ years of experience with the distributed data ecosystem (Spark, Hive, Druid, Presto) and streaming technologies such as Kafka/Flink. 
Experience working with Snowflake, Redshift, PostgreSQL and/or other DBMS platforms Excellent communication skills and experience working with technical and non-technical teams Knowledge of reporting tools such as Tableau, Superset and Looker Comfortable working in a fast-paced environment, a self-starter and self-organizing Ability to think strategically, analyze and interpret market and consumer information You must be located near one of our engineering hubs indicated above Notice to Applicants for Jobs Located in NYC or Remote Jobs Associated With Office in NYC Only We use Covey as part of our hiring and/or promotional process for jobs in NYC and certain features may qualify it as an AEDT in NYC. As part of the hiring and/or promotion process, we provide Covey with job requirements and candidate submitted applications. We began using Covey Scout for Inbound from August 21, 2023, through December 21, 2023, and resumed using Covey Scout for Inbound again on June 29, 2024. The Covey tool has been reviewed by an independent auditor. Results of the audit may be viewed here: Covey About DoorDash At DoorDash, our mission to empower local economies shapes how our team members move quickly, learn, and iterate in order to make impactful decisions that display empathy for our range of users—from Dashers to merchant partners to consumers. We are a technology and logistics company that started with door-to-door delivery, and we are looking for team members who can help us go from a company that is known for delivering food to a company that people turn to for any and all goods. DoorDash is growing rapidly and changing constantly, which gives our team members the opportunity to share their unique perspectives, solve new challenges, and own their careers. We're committed to supporting employees' happiness, healthiness, and overall well-being by providing comprehensive benefits and perks. 
Our Commitment to Diversity and Inclusion We're committed to growing and empowering a more inclusive community within our company, industry, and cities. That's why we hire and cultivate diverse teams of people from all backgrounds, experiences, and perspectives. We believe that true innovation happens when everyone has room at the table and the tools, resources, and opportunity to excel. If you need any accommodations, please inform your recruiting contact upon initial connection. We use Covey as part of our hiring and/or promotional process for jobs in certain locations. The Covey tool has been reviewed by an independent auditor. 
Results of the audit may be viewed here: https://getcovey.com/nyc-local-law-144 To request a reasonable accommodation under applicable law or alternate selection process, please inform your recruiting contact upon initial connection.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

About the job A little about us... LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 750 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across 35 countries, LTIMindtree — a Larsen & Toubro Group company — combines the industry-acclaimed strengths of erstwhile Larsen and Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more info, please visit www.ltimindtree.com Job Details - We are holding a weekend drive for the requirement of a Data Scientist at our Bangalore office. Date - 14th June Experience - 4 to 12 Yrs. Location – LTIMindtree Office, Bangalore Whitefield Notice Period - Immediate to 60 Days only Mandatory Skills - Gen-AI, Data Science, Python, RAG and Cloud (AWS/Azure) Secondary (Any) - Machine Learning, Deep Learning, ChatGPT, LangChain, prompting, vector stores, RAG, LLaMA, computer vision, OCR, Transformers, regression, forecasting, classification, hyperparameter tuning, MLOps, inference, model training, model deployment Generic JD: More than 6 years of experience in Data Engineering, Data Science and the AI/ML domain Excellent understanding of machine learning techniques and algorithms, such as GPTs, CNN, RNN, k-NN, Naive Bayes, SVM, Decision Forests, etc. Experience using business intelligence tools (e.g. Tableau, Power BI) and data frameworks (e.g. Hadoop) Experience with cloud-native skills.
Knowledge of SQL and Python; familiarity with Scala, Java or C++ is an asset Analytical mind, business acumen, and strong math skills (e.g. statistics, algebra) Experience with common data science toolkits, such as TensorFlow, Keras, PyTorch, pandas, Microsoft CNTK, NumPy, etc. Deep expertise in at least one of these is highly desirable. Experience with NLP, NLG and Large Language Models such as BERT, LLaMA, LaMDA, GPT, BLOOM, PaLM, DALL-E, etc. Great communication and presentation skills. Should have experience working in a fast-paced team culture. Experience with AI/ML and Big Data technologies such as AWS SageMaker, Azure Cognitive Services, Google Colab, Jupyter Notebook, Hadoop, PySpark, Hive, AWS EMR, etc. Experience with NoSQL databases, such as MongoDB, Cassandra, HBase, and vector databases Good understanding of applied statistics, such as distributions, statistical testing, regression, etc. Should be a data-oriented person with an analytical mind and business acumen.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

As a Software Developer you will work in a constantly evolving environment, due to technological advances and the strategic direction of the organization you work for. You will create, maintain, audit, and improve systems to meet particular needs, often as advised by a systems analyst or architect, testing both hardware and software systems to diagnose and resolve system faults. The role also covers writing diagnostic programs and designing and writing code for operating systems and software to ensure efficiency. When required, you will make recommendations for future developments. Benefits of Joining Us Challenging Projects: Work on cutting-edge projects and solve complex technical problems. Career Growth: Advance your career quickly and take on leadership roles. Mentorship: Learn from experienced mentors and industry experts. Global Opportunities: Work with clients from around the world and gain international experience. Competitive Compensation: Receive attractive compensation packages and benefits. If you're passionate about technology and want to work on challenging projects with a talented team, becoming an Infosys Power Programmer could be a great career choice. Mandatory Skills AWS Glue, AWS Redshift/Spectrum, S3, API Gateway, Athena, Step Functions and Lambda functions Experience with the Extract, Transform, Load (ETL) and Extract, Load, Transform (ELT) data integration patterns. Experience in designing and building data pipelines.
Development experience in one or more object-oriented programming languages, preferably Python Job Specs 5+ years of in-depth, hands-on experience in developing, testing, deploying and debugging Spark jobs using Scala on the Hadoop platform In-depth knowledge of Spark Core, working with RDDs, and Spark SQL In-depth knowledge of Spark optimization techniques and best practices Good knowledge of Scala functional programming: Try, Option, Future, Collections Good knowledge of Scala OOP: Classes, Traits and Objects (Singleton and Companion), Case Classes Good understanding of Scala language features: Type System, Implicits/Givens Hands-on experience working in a Hadoop environment (HDFS/Hive), AWS S3, EMR Python programming skills Working experience with workflow orchestration tools like Airflow and Oozie Working with API calls in Scala Understanding of and exposure to file formats such as Apache Avro, Parquet and JSON Good to have: knowledge of Protocol Buffers and geospatial data analytics Writing test cases using frameworks such as ScalaTest In-depth knowledge of build tools such as Gradle and SBT Experience using Git: resolving conflicts, working with branches Strong programming skills using data structures and algorithms Excellent analytical skills Good communication skills Qualification 7-10 Yrs in the industry BE/B.Tech in CS or equivalent

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Job Summary: We are seeking a skilled and innovative Data Scientist to join our team. The ideal candidate will have hands-on experience in AI/ML model development, Big Data processing, and working with cloud platforms like AWS. You will be responsible for analyzing large-scale datasets, developing machine learning models, and delivering actionable insights to drive business decisions. Key Responsibilities: Design, develop, and deploy machine learning and AI models to solve complex business problems. Work with large datasets using Big Data technologies (Hadoop, Spark, etc.). Build scalable data pipelines and workflows using Python, SQL, and cloud-native tools. Implement data preprocessing, feature engineering, and model tuning techniques. Use AWS services (e.g., S3, SageMaker, Lambda, EMR) for data processing, model training, and deployment. Collaborate with data engineers, analysts, and business stakeholders to define requirements and deliver solutions. Communicate findings clearly through dashboards, reports, or presentations. Required Skills: Strong programming skills in Python and experience with libraries like Pandas, NumPy, Scikit-learn, TensorFlow, or PyTorch. Experience with Big Data frameworks such as Spark, Hadoop, or Hive. Hands-on experience with AWS cloud services related to data science and ML. Solid understanding of machine learning algorithms, model evaluation, and tuning.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Kochi, Kerala, India

On-site

Linkedin logo

Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform Responsibilities Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies built on the platform Experience in developing streaming pipelines Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing Preferred Education Master's Degree Required Technical And Professional Expertise Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala; Minimum 3 years of experience on Cloud Data Platforms on AWS; Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB Good to excellent SQL skills Exposure to streaming solutions and message brokers like Kafka Preferred Technical And Professional Experience Certification in AWS and Databricks, or Cloudera Certified Spark developer

Posted 1 week ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Description Amazon Retail Financial Intelligence Systems is seeking a seasoned and talented Senior Data Engineer to join the Fortune Platform team. Fortune is a fast-growing team with a mandate to build tools to automate profit-and-loss forecasting and planning for the Physical Consumer business. We are building the next generation of Business Intelligence solutions using big data technologies such as Apache Spark, Hive/Hadoop, and distributed query engines. As a Data Engineer at Amazon, you will be working in a large, extremely complex and dynamic data environment. You should be passionate about working with big data and able to learn new technologies rapidly and evaluate them critically. You should have excellent communication skills and be able to work with business owners to translate business requirements into system solutions. You are a self-starter, comfortable with ambiguity, and used to working in a fast-paced and ever-changing environment. Ideally, you are also experienced with at least one programming language such as Java, C++, Spark/Scala, Python, etc. Major Responsibilities Work with a team of product and program managers, engineering leaders, and business leaders to build data architectures and platforms to support the business Design, develop, and operate highly scalable, high-performance, low-cost, and accurate data pipelines in distributed data processing platforms Recognize and adopt best practices in data processing, reporting, and analysis: data integrity, test design, analysis, validation, and documentation Keep up to date with big data technologies, and evaluate and make decisions around the use of new or existing software products to design the data architecture Design, build and own all the components of a high-volume data warehouse end to end.
Provide end-to-end data engineering support for project lifecycle execution (design, execution and risk assessment) Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers Interface with other technology teams to extract, transform, and load (ETL) data from a wide variety of data sources Own the functional and nonfunctional scaling of software systems in your ownership area. Implement big data solutions for distributed computing. Key job responsibilities As a DE on our team, you will be responsible for leading the data modelling, database design, and launch of some of the core data pipelines. You will have significant influence on our overall strategy by helping define the data model, driving the database design, and spearheading the best practices that deliver high-quality products. About The Team Profit Intelligence systems measure and predict the true profit (or loss) for each item as a result of a specific shipment to an Amazon customer. Profit Intelligence is all about providing intelligent ways for Amazon to understand profitability across the retail business. What are the hidden factors driving growth or profitability across millions of shipments each day? We compute the profitability of each and every shipment that gets shipped out of Amazon. Guess what: we predict the profitability of future possible shipments too. We are a team of agile, can-do engineers who believe that not only are moon shots possible but that they can be done before lunch. All it takes is finding new ideas that challenge our preconceived notions of how things should be done. Process and procedure matter less than ideas and the practical work of getting stuff done. This is a place for exploring the new and taking risks. We push the envelope in using cloud services in AWS as well as the latest in distributed systems, forecasting algorithms, and data mining.
Basic Qualifications 3+ years of data engineering experience Experience with data modeling, warehousing and building ETL pipelines Experience with SQL Preferred Qualifications Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases) Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI MAA 12 SEZ Job ID: A3006789

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Greetings from TCS! TCS is hiring for Big Data (PySpark & Scala) Location: Chennai Desired Experience Range: 5+ Years Must-Have • PySpark • Hive Good-to-Have • Spark • HBase • DQ tool • Agile Scrum experience • Exposure to data ingestion from disparate sources onto a Big Data platform Thanks Anshika

Posted 1 week ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Description Business Data Technologies (BDT) makes it easier for teams across Amazon to produce, store, catalog, secure, move, and analyze data at massive scale. Our managed solutions combine standard AWS tooling, open-source products, and custom services to free teams from worrying about the complexities of operating at Amazon scale. This lets BDT customers move beyond the engineering and operational burden associated with managing and scaling platforms, and instead focus on scaling the value they can glean from their data, both for their customers and their teams. We own one of the largest data lakes at Amazon, where thousands of Amazon teams can search, share, and store exabytes (EB) of data in a secure and seamless way; using our solutions, teams around the world can schedule and process millions of workloads on a daily basis. We provide enterprise solutions that focus on compliance, security, integrity, and cost efficiency of operating and managing EBs of Amazon data. Key job responsibilities Core Responsibilities Be hands-on with ETL to build data pipelines to support automated reporting Interface with other technology teams to extract, transform, and load data from a wide variety of data sources Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, and Redshift. Model data and metadata for ad-hoc and pre-built reporting Interface with business customers, gathering requirements and delivering complete reporting solutions Build robust and scalable data integration (ETL) pipelines using SQL, Python and Spark. Build and deliver high-quality data sets to support business analysts, data scientists, and customer reporting needs.
Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers Participate in strategic & tactical planning discussions A day in the life As a Data Engineer, you will be working with cross-functional partners from Science, Product, SDEs, Operations and Leadership to translate raw data into actionable insights for stakeholders, empowering them to make data-driven decisions. Some of the key activities include Crafting the Data Flow: Design and build data pipelines, the backbone of our data ecosystem. Ensure the integrity of the data journey by implementing robust data quality checks and monitoring processes. Architect for Insights: Translate complex business requirements into efficient data models that optimize data analysis and reporting. Automate data processing tasks to streamline workflows and improve efficiency. Become a data detective, ensuring data availability and performance. Basic Qualifications 1+ years of data engineering experience Experience with SQL Experience with data modeling, warehousing and building ETL pipelines Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) Experience with one or more scripting languages (e.g., Python, KornShell) Preferred Qualifications Experience with big data technologies such as Hadoop, Hive, Spark, EMR Experience with any ETL tool, such as Informatica, ODI, SSIS, BODI, Datastage, etc. Knowledge of cloud services such as AWS or equivalent Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ADCI HYD 13 SEZ Job ID: A3006419

Posted 1 week ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Indeed logo

- 1+ years of data engineering experience - Experience with SQL - Experience with data modeling, warehousing and building ETL pipelines - Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) The Prime Data Engineering & Analytics (PDEA) team is seeking to hire passionate Data Engineers to build and manage the central petabyte-scale data infrastructure supporting worldwide Prime business operations. At Amazon Prime, understanding customer data is paramount to our success in providing customers with relevant and enticing benefits such as fast free shipping, instant videos, streaming music and free Kindle books in the US and international markets. At Amazon you will be working in one of the world's largest and most complex data environments. You will be part of a team that works with the marketing, retail, finance, analytics, machine learning and technology teams to provide real-time data processing solutions that give Amazon leadership, marketers, and PMs timely, flexible and structured access to customer insights. The team will be responsible for building this platform end to end using the latest AWS technologies and software development principles. As a Data Engineer, you will be responsible for leading the architecture, design and development of the data, metrics and reporting platform for Prime. You will architect and implement new and automated Business Intelligence solutions, including big data and new analytical capabilities that support our Development Engineers, Analysts and Retail business stakeholders with timely, actionable data, metrics and reports, while satisfying scalability, reliability, accuracy, performance and budget goals and driving automation and operational efficiencies. You will partner with business leaders to drive strategy and prioritize projects and feature sets.
You will also write and review business cases and drive the development process from design to release. In addition, you will provide technical leadership and mentoring for a team of highly capable Data Engineers. Responsibilities 1. Own design and execution of end-to-end projects 2. Own managing WW Prime core services data infrastructure 3. Establish key relationships which span Amazon business units and Business Intelligence teams 4. Implement standardized, automated operational and quality-control processes to deliver accurate and timely data and reporting to meet or exceed SLAs Experience with big data technologies such as Hadoop, Hive, Spark, EMR Experience with any ETL tool, such as Informatica, ODI, SSIS, BODI, Datastage, etc. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Posted 1 week ago

Apply

Exploring Hive Jobs in India

Hive is a popular data warehousing tool used for querying and managing large datasets in distributed storage. In India, the demand for professionals with expertise in Hive is on the rise, with many organizations looking to hire skilled individuals for various roles related to data processing and analysis.
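
To make this concrete, a typical HiveQL workflow looks much like standard SQL; the table and column names below are purely illustrative:

```sql
-- Define a managed table (hypothetical schema), stored in the ORC columnar format
CREATE TABLE IF NOT EXISTS page_views (
  user_id   BIGINT,
  url       STRING,
  view_time TIMESTAMP
)
STORED AS ORC;

-- Query it with familiar SQL syntax; Hive compiles this into
-- distributed jobs on the underlying execution engine (MapReduce, Tez, or Spark)
SELECT url, COUNT(*) AS views
FROM page_views
GROUP BY url
ORDER BY views DESC
LIMIT 10;
```

Because Hive translates queries into batch jobs over distributed storage, it favors large analytical scans over the low-latency point lookups a traditional database handles.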

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Delhi

These cities are known for their thriving tech industries and offer numerous opportunities for professionals looking to work with Hive.

Average Salary Range

The average salary range for Hive professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.

Career Path

Typically, a career in Hive progresses from roles such as Junior Developer or Data Analyst to Senior Developer, Tech Lead, and eventually Architect or Data Engineer. Continuous learning and hands-on experience with Hive are crucial for advancing in this field.

Related Skills

Apart from expertise in Hive, professionals in this field are often expected to have knowledge of SQL, Hadoop, data modeling, ETL processes, and data visualization tools like Tableau or Power BI.
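
The Hadoop and ETL skills above come together in a common pattern: an upstream ETL job lands files in HDFS, and Hive maps a schema onto them with an external table so they can be queried in place. A minimal sketch (paths and names are hypothetical):

```sql
-- External table over files an upstream ETL job wrote to HDFS;
-- dropping the table removes only the metadata, never the data files
CREATE EXTERNAL TABLE IF NOT EXISTS raw_orders (
  order_id BIGINT,
  amount   DECIMAL(10,2),
  status   STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/data/warehouse/raw_orders';
```

This managed-versus-external distinction is a frequent interview topic, since it determines whether Hive owns the data's lifecycle or only its schema.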

Interview Questions

  • What is Hive and how does it differ from traditional databases? (basic)
  • Explain the difference between HiveQL and SQL. (medium)
  • How do you optimize Hive queries for better performance? (advanced)
  • What are the different types of tables supported in Hive? (basic)
  • Can you explain the concept of partitioning in Hive tables? (medium)
  • What is the significance of metastore in Hive? (basic)
  • How does Hive handle schema evolution? (advanced)
  • Explain the use of SerDe in Hive. (medium)
  • What are the various file formats supported by Hive? (basic)
  • How do you troubleshoot performance issues in Hive queries? (advanced)
  • Describe the process of joining tables in Hive. (medium)
  • What is dynamic partitioning in Hive and when is it used? (advanced)
  • How can you schedule jobs in Hive? (medium)
  • Discuss the differences between bucketing and partitioning in Hive. (advanced)
  • How do you handle null values in Hive? (basic)
  • Explain the role of the Hive execution engine in query processing. (medium)
  • Can you give an example of a complex Hive query you have written? (advanced)
  • How does Hive support ACID transactions? (medium)
  • Discuss the advantages and disadvantages of using Hive for data processing. (advanced)
  • How do you secure data in Hive? (medium)
  • What are the limitations of Hive? (basic)
  • Explain the concept of bucketing in Hive and when it is used. (medium)
  • Discuss the role of Hive in the Hadoop ecosystem. (basic)
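
Several of the questions above (partitioning, dynamic partitioning, bucketing) can be grounded in a short HiveQL sketch; table and column names are illustrative:

```sql
-- Partitions prune whole directories at query time;
-- buckets hash rows into a fixed number of files per partition
CREATE TABLE IF NOT EXISTS events (
  event_id BIGINT,
  payload  STRING
)
PARTITIONED BY (event_date STRING)
CLUSTERED BY (event_id) INTO 32 BUCKETS
STORED AS ORC;

-- Dynamic partitioning: partition values come from the data itself
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

INSERT OVERWRITE TABLE events PARTITION (event_date)
SELECT event_id, payload, event_date
FROM staging_events;

-- A filter on the partition column lets Hive skip entire directories
SELECT COUNT(*) FROM events WHERE event_date = '2024-01-01';
```

Being able to explain why the partition-column filter avoids a full scan, and when bucketing helps joins and sampling, covers a good share of the medium and advanced questions listed here.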

Closing Remark

As you explore job opportunities in the field of Hive in India, remember to showcase your expertise and passion for data processing and analysis. Prepare well for interviews by honing your skills and staying updated with the latest trends in the industry. Best of luck in your job search!

cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies