
3786 Hadoop Jobs - Page 33


0 years

2 - 2 Lacs

Gurgaon

On-site


Our Purpose: Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary: Data Scientist

Who is Mastercard? Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships, and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all.

Our Team: As consumer preference for digital payments continues to grow, ensuring a seamless and secure consumer experience is top of mind. The Optimization Solutions team focuses on tracking digital performance across all products and regions, understanding the factors influencing performance and the broader industry landscape. This includes delivering data-driven insights and business recommendations, engaging directly with key external stakeholders on implementing optimization solutions (new and existing), and partnering across the organization to drive alignment and ensure action is taken. Are you excited about data assets and the value they bring to an organization? Are you an evangelist for data-driven decision-making? Are you motivated to be part of a team that builds large-scale analytical capabilities supporting end users across 6 continents? Do you want to be the go-to resource for data science and analytics in the company?

The Role: Work closely with the global optimization solutions team to architect, develop, and maintain advanced reporting and data visualization capabilities on large volumes of data to support data insights and analytical needs across products, markets, and services. The candidate for this position will focus on building solutions using machine learning and creating actionable insights to support product optimization and sales enablement. Prototype new algorithms; experiment, evaluate, and deliver actionable insights. Drive the evolution of products with an impact focused on data science and engineering. Design machine learning systems and self-running artificial intelligence (AI) software to automate predictive models. Perform data ingestion, aggregation, and processing on high-volume and high-dimensionality data to drive and enable data unification and produce relevant insights. Continuously innovate and determine new approaches, tools, techniques and technologies to solve business problems and generate business insights and recommendations. Apply knowledge of metrics, measurements, and benchmarking to complex and demanding solutions.
All about You:
- A superior academic record at a leading university in Computer Science, Data Science, Technology, mathematics, statistics, or a related field, or equivalent work experience
- Experience in data management, data mining, data analytics, data reporting, data product development and quantitative analysis
- Strong analytical skills with a track record of translating data into compelling insights
- Prior experience working in a product development role
- Knowledge of ML frameworks, libraries, data structures, data modeling, and software architecture
- Proficiency in using Python/Spark, Hadoop platforms and tools (Hive, Impala, Airflow, NiFi), and SQL to build Big Data products and platforms (see the sketch below)
- Experience with an enterprise business intelligence/data platform, e.g. Tableau or Power BI, is a plus
- Demonstrated success interacting with stakeholders to understand technical needs and ensuring analyses and solutions meet their needs effectively
- Ability to build a strong narrative on the business value of products and actively participate in sales enablement efforts
- Able to work in a fast-paced, deadline-driven environment, both as part of a team and as an individual contributor

Corporate Security Responsibility: All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization. It is therefore expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: abide by Mastercard’s security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
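For a concrete flavor of the Python/Spark-on-Hadoop stack named above, here is a minimal PySpark sketch of an ingestion-and-aggregation job of the kind the role describes; the table and column names are hypothetical, not taken from the posting:

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = (SparkSession.builder
         .appName("digital-performance-summary")
         .enableHiveSupport()
         .getOrCreate())

# Hypothetical Hive table of digital transactions.
txns = spark.table("analytics.digital_transactions")

# Aggregate volumes and approval rates by market and product
# to feed downstream reporting and visualization.
summary = (txns.groupBy("market", "product")
           .agg(F.count("*").alias("txn_count"),
                F.avg(F.col("approved").cast("double")).alias("approval_rate")))

summary.write.mode("overwrite").saveAsTable("analytics.approval_summary")
```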

Posted 1 week ago

Apply

6.0 - 10.0 years

0 - 0 Lacs

Delhi

On-site


Job Profile

Role: AI Developer
Location: Delhi
Experience: 6-10 Years

Qualifications:
1. Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
2. Proven experience of 6-10 years as an AI Developer or in a similar role.
3. Proficient in coding, with the ability to develop and implement AI models and algorithms from scratch.
4. Strong knowledge of AI frameworks and libraries.
5. Proficiency in data manipulation and analysis methods.
6. Excellent problem-solving abilities and attention to detail.
7. Good communication and teamwork skills.

Responsibilities:
1. Implement AI solutions that seamlessly integrate with existing business systems to enhance functionality and user interaction.
2. Manage the data flow and infrastructure for the effective functioning of the AI department.
3. Design, develop, and implement AI models and algorithms from scratch.
4. Collaborate with the IT team to ensure the successful deployment of AI models.
5. Continuously research and implement new AI technologies to improve existing systems.
6. Maintain up-to-date knowledge of AI and machine learning trends and advancements.
7. Provide technical guidance and support to the team as needed.

Coding Knowledge Required:
1. Proficiency in programming languages like Python, Java, R, etc.
2. Experience with machine learning frameworks like TensorFlow or PyTorch (see the sketch below).
3. Knowledge of cloud platforms like AWS, Google Cloud, or Azure.
4. Familiarity with databases, both SQL and NoSQL.
5. Understanding of data structures, data modeling, and software architecture.
6. Experience with distributed data/computing tools like Hadoop, Hive, Spark, etc.

Job Type: Full-time
Pay: ₹14,214.66 - ₹66,535.00 per month
Schedule: Day shift
Work Location: In person
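As a small illustration of the "models and algorithms from scratch" requirement using a framework like PyTorch, here is a minimal, self-contained training-loop sketch on toy data; all shapes and hyperparameters are illustrative only:

```python
import torch
import torch.nn as nn

# Toy binary-classification data standing in for a real dataset.
X = torch.randn(256, 10)
y = torch.randint(0, 2, (256,)).float()

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Basic gradient-descent training loop.
for epoch in range(20):
    optimizer.zero_grad()
    logits = model(X).squeeze(1)
    loss = loss_fn(logits, y)
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.4f}")
```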

Posted 1 week ago

Apply

5.0 - 6.0 years

5 - 10 Lacs

India

On-site


Job Summary: We are seeking a highly skilled Python Developer to join our team.

Key Responsibilities:
- Design, develop, and deploy Python applications
- Work independently on machine learning model development, evaluation, and optimization
- Implement scalable and efficient algorithms for predictive analytics and automation
- Optimize code for performance, scalability, and maintainability
- Collaborate with stakeholders to understand business requirements and translate them into technical solutions
- Integrate APIs and third-party tools to enhance functionality
- Document processes, code, and best practices for maintainability

Required Skills & Qualifications:
- 5-6 years of professional experience in Python application development
- Proficiency in Python libraries such as Pandas, NumPy, SciPy, and Matplotlib
- Experience with SQL and NoSQL databases (PostgreSQL, MongoDB, etc.)
- Hands-on experience with big data technologies (Apache Spark, Delta Lake, Hadoop, etc.)
- Strong experience in developing APIs and microservices using FastAPI, Flask, or Django (see the sketch below)
- Good understanding of data structures, algorithms, and software development best practices
- Strong problem-solving and debugging skills
- Ability to work independently and handle multiple projects simultaneously
- Good to have: working knowledge of cloud platforms (Azure/AWS/GCP) for deploying ML models and data applications

Job Type: Full-time
Pay: ₹500,000.00 - ₹1,000,000.00 per year
Schedule: Fixed shift
Work Location: In person
Application Deadline: 30/06/2025
Expected Start Date: 01/07/2025
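To picture the "APIs and microservices using FastAPI" requirement, here is a minimal sketch of a prediction endpoint; the route name, payload shape, and scoring logic are placeholders, not from the posting:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features) -> dict:
    # Placeholder scoring; a real service would load a trained model here.
    score = sum(features.values) / max(len(features.values), 1)
    return {"score": score}
```

Run locally with `uvicorn main:app --reload` and POST a JSON body like `{"values": [1.0, 2.0]}`.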

Posted 1 week ago

Apply

8.0 years

6 - 8 Lacs

Chennai

On-site


Responsibilities:
- Develop, test, and deploy data processing applications using Apache Spark and Scala
- Optimize and tune Spark applications for better performance on large-scale data sets (see the sketch below)
- Work with the Cloudera Hadoop ecosystem (e.g., HDFS, Hive, Impala, HBase, Kafka) to build data pipelines and storage solutions
- Collaborate with data scientists, business analysts, and other developers to understand data requirements and deliver solutions
- Design and implement high-performance data processing and analytics solutions
- Ensure data integrity, accuracy, and security across all processing tasks
- Troubleshoot and resolve performance issues in Spark, Cloudera, and related technologies
- Implement version control and CI/CD pipelines for Spark applications

Required Skills & Experience:
- Minimum 8 years of experience in application development
- Strong hands-on experience in Apache Spark, Scala, and Spark SQL for distributed data processing
- Hands-on experience with Cloudera Hadoop (CDH) components such as HDFS, Hive, Impala, HBase, Kafka, and Sqoop
- Familiarity with other Big Data technologies, including Apache Kafka, Flume, Oozie, and NiFi
- Experience building and optimizing ETL pipelines using Spark and working with structured and unstructured data
- Experience with SQL and NoSQL databases such as HBase, Hive, and PostgreSQL
- Knowledge of data warehousing concepts, dimensional modeling, and data lakes
- Ability to troubleshoot and optimize Spark and Cloudera platform performance
- Familiarity with version control tools like Git and CI/CD tools (e.g., Jenkins, GitLab)
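Two optimization techniques this role alludes to, broadcast joins and key-aware repartitioning, look roughly like the sketch below. It is written in PySpark for brevity (the role itself is Scala-focused), and the paths and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("spark-tuning-sketch").getOrCreate()

large = spark.read.parquet("/data/events")        # large fact data (hypothetical path)
small = spark.read.parquet("/data/dim_product")   # small dimension table

# Broadcasting the small table avoids a full shuffle join.
joined = large.join(broadcast(small), "product_id")

# Repartition on the grouping key before a wide aggregation, and cache
# the result if several downstream actions reuse it.
agg = (joined.repartition(200, "region")
       .groupBy("region")
       .count()
       .cache())

agg.write.mode("overwrite").parquet("/data/region_counts")
```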

Posted 1 week ago

Apply

3.0 years

4 - 7 Lacs

Chennai

On-site


- 3+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with SQL

Amazon Retail Financial Intelligence Systems is seeking a seasoned and talented Senior Data Engineer to join the Fortune Platform team. Fortune is a fast-growing team with a mandate to build tools to automate profit-and-loss forecasting and planning for the Physical Consumer business. We are building the next generation of Business Intelligence solutions using big data technologies such as Apache Spark, Hive/Hadoop, and distributed query engines.

As a Data Engineer at Amazon, you will be working in a large, extremely complex and dynamic data environment. You should be passionate about working with big data and be able to learn new technologies rapidly and evaluate them critically. You should have excellent communication skills and be able to work with business owners to translate business requirements into system solutions. You are a self-starter, comfortable with ambiguity, and used to working in a fast-paced and ever-changing environment. Ideally, you are also experienced with at least one programming language such as Java, C++, Scala (with Spark), or Python.

Major Responsibilities:
- Work with a team of product and program managers, engineering leaders, and business leaders to build data architectures and platforms to support the business
- Design, develop, and operate highly scalable, high-performance, low-cost, and accurate data pipelines in distributed data processing platforms
- Recognize and adopt best practices in data processing, reporting, and analysis: data integrity, test design, analysis, validation, and documentation
- Keep up to date with big data technologies; evaluate and make decisions around the use of new or existing software products to design the data architecture
- Design, build and own all the components of a high-volume data warehouse end to end
- Provide end-to-end data engineering support for project lifecycle execution (design, execution and risk assessment)
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Interface with other technology teams to extract, transform, and load (ETL) data from a wide variety of data sources
- Own the functional and nonfunctional scaling of software systems in your ownership area
- Implement big data solutions for distributed computing

Key job responsibilities: As a DE on our team, you will be responsible for leading the data modelling, database design, and launch of some of the core data pipelines. You will have significant influence on our overall strategy by helping define the data model, drive the database design, and spearhead the best practices that deliver high-quality products.

About the team: Profit Intelligence systems measure and predict true profit (or loss) for each item as a result of a specific shipment to an Amazon customer. Profit Intelligence is all about providing intelligent ways for Amazon to understand profitability across the retail business. What are the hidden factors driving the growth or profitability across millions of shipments each day? We compute the profitability of each and every shipment that gets shipped out of Amazon. Guess what, we predict the profitability of future possible shipments too. We are a team of agile, can-do engineers, who believe that not only are moon shots possible but that they can be done before lunch.
All it takes is finding new ideas that challenge our preconceived notions of how things should be done. Process and procedure matter less than ideas and the practical work of getting stuff done. This is a place for exploring the new and taking risks. We push the envelope in using cloud services in AWS as well as the latest in distributed systems, forecasting algorithms, and data mining.

Preferred qualifications:
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions (see the sketch below)
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
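As a toy illustration of the ETL work described above (extract from S3, transform, load back), here is a minimal pandas/boto3 sketch; the buckets, keys, and columns are invented for the example, and pyarrow is assumed for Parquet support:

```python
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")

# Extract: pull a raw CSV drop from S3 (hypothetical bucket and key).
raw = s3.get_object(Bucket="example-raw", Key="shipments/2025-06-01.csv")
df = pd.read_csv(io.BytesIO(raw["Body"].read()))

# Transform: compute per-shipment contribution profit and roll it up.
df["profit"] = df["revenue"] - df["cost_of_goods"] - df["shipping_cost"]
daily = df.groupby("fulfillment_center", as_index=False)["profit"].sum()

# Load: write the curated aggregate back to S3 as Parquet.
buf = io.BytesIO()
daily.to_parquet(buf, index=False)
s3.put_object(Bucket="example-curated", Key="profit/2025-06-01.parquet",
              Body=buf.getvalue())
```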

Posted 1 week ago

Apply

7.0 - 12.0 years

0 Lacs

Mysore, Karnataka, India

On-site


Job Responsibilities:
- Develop teaching materials including exercises and assignments
- Conduct classroom training / virtual training
- Design assessments for various proficiency levels in a given competency
- Enhance course material and course delivery based on feedback to improve training effectiveness
- Gather feedback from stakeholders, identify actions based on feedback and implement changes

Location: Mysore, Mangalore, Bangalore, Chennai, Pune, Hyderabad, Chandigarh

Description of the Profile: We are looking for trainers with 7 to 12 years of teaching experience and technology know-how in one or more of the following areas:
- Java – Java programming, Spring, Angular / React, Bootstrap
- Microsoft – C# programming, SQL Server, ADO.NET, ASP.NET, MVC design pattern, Azure, MS Power platforms, MS Dynamics 365 CRM, MS Dynamics 365 ERP, SharePoint
- Testing – Selenium, Micro Focus UFT, Micro Focus ALM tools, SOA testing, SoapUI, REST Assured, Appium
- Big Data – Python programming, Hadoop, Spark, Scala, MongoDB, NoSQL
- SAP – SAP ABAP programming / SAP MM / SAP SD / SAP BI / SAP S/4HANA
- Oracle – Oracle E-Business Suite (EBS) / PeopleSoft / Siebel CRM / Oracle Cloud / OBIEE / Fusion Middleware
- API and integration – API, Microservices, TIBCO, Apigee, Mule
- Digital Commerce – Salesforce, Adobe Experience Manager
- Digital Process Automation – PEGA, Appian, Camunda, Unqork, UiPath
- MEAN / MERN stacks
- Business Intelligence – SQL Server, ETL using SQL Server, Analysis using SQL Server, Enterprise reporting using SQL, Visualization
- Data Science – Python for data science, Machine learning, Exploratory data analysis, Statistics & Probability
- Cloud & Infrastructure Management – Network administration / Database administration / Windows administration / Linux administration / Middleware administration / End User Computing / ServiceNow
- Cloud platforms like AWS / GCP / Azure / Oracle Cloud, Virtualization
- Cybersecurity – Infra Security / Identity & Access Management / Application Security / Governance & Risk Compliance / Network Security
- Mainframe – COBOL, DB2, CICS, JCL
- Open source – Python, PHP, Unix / Linux, MySQL, Apache, HTML5, CSS3, JavaScript

Posted 1 week ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


Syniverse is the world’s most connected company. Whether we’re developing the technology that enables intelligent cars to safely react to traffic changes or freeing travelers to explore by keeping their devices online wherever they go, we believe in leading the world forward. Which is why we work with some of the world’s most recognized brands. Eight of the top 10 banks. Four of the top 5 global technology companies. Over 900 communications providers. And how we’re able to provide our incredible talent with an innovative culture and great benefits.

Who We're Looking For: As a Lead Quality Assurance Engineer, you will be in charge of designing and developing the QA management systems and tools of the organization. You will define test requirements and automate test procedures to help create and maintain an exceptional user experience for Syniverse’s customers. The ideal candidate will be responsible for conducting tests before product launches to ensure software runs smoothly and meets client needs, while being cost-effective.

Some Of What You'll Do (duties and responsibilities):
- Work from Jira requirements to write test plans
- Provide input to team leads, managers and stakeholders as needed to manage test plans and schedules
- Assign tasks and track project deliverables
- Consult with Development to identify test data
- Properly diagnose test results and document product defects
- Execute test scripts and cases and initiate modifications if necessary
- Provide daily test statuses on testing progress and issues
- Actively participate in product and project team meetings
- Keep abreast of business needs and stay current with technology trends
- ETL Testing: Design and execute comprehensive test plans and test cases to validate ETL processes. Ensure data extraction, transformation, and loading operations meet business requirements and data quality standards. Identify and report data anomalies, discrepancies, and inconsistencies.
- BI Testing: Design and execute BI test cases to validate reports and dashboards.
- Data Validation: Develop and maintain data validation scripts and procedures to verify data integrity. Perform data reconciliation between source and target systems (see the sketch below). Validate data transformations, aggregations, and calculations.
- Performance Testing: Conduct performance and scalability testing of ETL processes to ensure optimal data flow. Identify bottlenecks and optimize ETL workflows for efficiency.
- Regression Testing: Establish and maintain regression test suites to prevent regressions in ETL pipelines. Automate regression testing where possible to streamline validation processes.
- Documentation: Document test cases, test results, and testing procedures. Maintain documentation for ETL processes and data mappings. Errors and defects are created in Jira and fully documented, including description, steps to recreate, and attached failed-test collateral.
- Collaboration: Collaborate with data engineers, data analysts, and business stakeholders to understand data requirements and business logic. Work closely with the development team to ensure ETL code changes are tested thoroughly.
- Issue Resolution: Investigate and troubleshoot data-related issues and defects. Work with the development team to resolve identified problems.

Requirements:
- 7-12 years of software engineering experience
- Experience with big data technologies: Hadoop, Impala, Kafka, plus knowledge of Flink jobs
- Expertise in formal software testing methodologies
- Scripting and automated software testing tools experience is a plus
- 3+ years’ experience working with industry-standard testing tools like JMeter, Zephyr, Cucumber, Postman, and others
- Strong understanding of platforms (UNIX experience preferred)
- Programming knowledge in any scripting or programming language is a plus
- Jira knowledge preferred

Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field
- Proven experience as a Data ETL / BI Test Engineer or in a similar role
- Proficiency in SQL for data validation and querying
- Good understanding of ETL processes and data warehousing concepts
- Knowledge of data quality best practices and testing methodologies
- Working knowledge of DB triggers and procedures is a plus
- Excellent problem-solving skills and attention to detail
- Strong communication and collaboration skills
- Knowledge of data governance and data privacy regulations
- Experience with big data technologies (e.g., Hadoop, Spark) is a plus
- Familiarity with version control systems (e.g., Git)
- Certification in software testing (e.g., ISTQB) is a plus

Why You Should Join Us: Join us as we write a new chapter, guided by world-class leadership. Come be a part of an exciting and growing organization where we offer competitive total compensation, flexible/remote work, and a leadership team committed to fostering an inclusive, collaborative, and transparent organizational culture. At Syniverse, connectedness is at the core of our business. We believe diversity, equity, and inclusion among our employees is crucial to our success as a global company as we seek to recruit, develop, and retain the most talented people who want to help us connect the world. Know someone at Syniverse? Be sure to have them submit you as a referral prior to applying for this position.
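The data-reconciliation duty described above usually boils down to comparing counts and keys between source and target. A minimal sketch, assuming SQLAlchemy-reachable databases; the connection strings, table, and key names are hypothetical:

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical source and target warehouses.
src = create_engine("postgresql://user:pass@source-host/warehouse")
tgt = create_engine("postgresql://user:pass@target-host/warehouse")

def reconcile(table: str, key: str) -> None:
    source = pd.read_sql(f"SELECT * FROM {table}", src)
    target = pd.read_sql(f"SELECT * FROM {table}", tgt)

    # Row-count check first, then a key-level diff to pinpoint gaps.
    assert len(source) == len(target), f"{table}: row counts differ"
    missing = set(source[key]) - set(target[key])
    assert not missing, f"{table}: {len(missing)} keys missing in target"

reconcile("billing_events", "event_id")
```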

Posted 1 week ago

Apply

12.0 years

5 - 6 Lacs

Noida

On-site


Description

Job Title: Solution Architect
Designation: Senior
Company: Hitachi Rail GTS India
Location: Noida, UP, India
Salary: As per industry

Company Overview: Hitachi Rail is right at the forefront of the global mobility sector following the acquisition. The closing strengthens the company's strategic focus on helping current and potential Hitachi Rail and GTS customers through the sustainable mobility transition – the shift of people from private to sustainable public transport, driven by digitalization.

Position Overview: We are looking for a Solution Architect who will be responsible for translating business requirements into technical solutions, ensuring the architecture is scalable, secure, and aligned with enterprise standards. The Solution Architect will play a crucial role in defining the architecture and technical direction of the existing system. You will be responsible for the design, implementation, and deployment of solutions that integrate with transit infrastructure, ensuring seamless fare collection, real-time transaction processing, and enhanced user experiences. You will collaborate with development teams, stakeholders, and external partners to create scalable, secure, and highly available software solutions.

Job Roles & Responsibilities:
- Architectural Design: Develop architectural documentation such as solution blueprints, high-level designs, and integration diagrams. Lead the design of the system's architecture, ensuring scalability, security, and high availability. Ensure the architecture aligns with the company's strategic goals and future vision for public transit technologies.
- Technology Strategy: Select the appropriate technology stack and tools to meet both functional and non-functional requirements, considering performance, cost, and long-term sustainability.
- System Integration: Work closely with teams to design and implement the integration of the AFC system with various third-party systems (e.g., payment gateways, backend services, cloud infrastructure).
- API Design & Management: Define standards for APIs to ensure easy integration with external systems, such as mobile applications, ticketing systems, and payment providers.
- Security & Compliance: Ensure that the AFC system meets the highest standards of data security, particularly for payment information, and complies with industry regulations (e.g., PCI-DSS, GDPR).
- Stakeholder Collaboration: Act as the technical lead during project planning and discussions, ensuring the design meets customer and business needs.
- Technical Leadership: Mentor and guide development teams through best practices in software development and architectural principles.
- Performance Optimization: Monitor and optimize system performance to ensure the AFC system can handle high volumes of transactions without compromise.
- Documentation & Quality Assurance: Maintain detailed architecture documentation, including design patterns, data flow, and integration points. Ensure the implementation follows best practices and quality standards.
- Research & Innovation: Stay up to date with the latest advancements in technology and propose innovative solutions to enhance the AFC system.

Skills:
1. Programming Languages: .NET (C#), C/C++, Java, Python
2. Web Development Frameworks: ASP.NET Core (C#), Angular
3. Microservices & Architecture: Spring Cloud, Docker, Kubernetes, Istio, Apache Kafka, RabbitMQ, Consul, GraphQL
4. Cloud Platforms: Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Kubernetes on cloud (e.g., AWS EKS, GCP GKE), Terraform (Infrastructure as Code)
5. Databases: relational databases (SQL), NoSQL databases, data warehousing
6. API Technologies: SOAP/RESTful API design, GraphQL, gRPC, OpenAPI / Swagger (API documentation)
7. Security Technologies: OAuth2 / OpenID Connect (authentication and authorization), JWT (JSON Web Tokens), SSL/TLS encryption, OWASP Top 10 (security best practices), Vault (secret management), Keycloak (identity and access management)
8. Design & Architecture Tools: UML (Unified Modeling Language), Lucidchart / draw.io (diagramming), PlantUML (text-based UML generation), C4 Model (software architecture model), Enterprise Architect (modeling)
9. Miscellaneous Tools & Frameworks: Apache Hadoop / Spark (big data), Elasticsearch (search engine), Apache Kafka (stream processing), TensorFlow / PyTorch (machine learning/AI), Redis (caching and pub/sub), DevOps and CI/CD tools

Education: Bachelor's or Master’s degree in Computer Science, Information Technology, or a related field.

Experience Required:
- 12+ years of experience in solution architecture or software design
- Proven experience with enterprise architecture frameworks (e.g., TOGAF, Zachman)
- Strong understanding of cloud platforms (AWS, Azure, or Google Cloud)
- Experience in system integration, API design, microservices, and SOA
- Familiarity with data modeling and database technologies (SQL, NoSQL)
- Strong communication and stakeholder management skills

Preferred:
- Certification in cloud architecture (e.g., AWS Certified Solutions Architect, Azure Solutions Architect Expert)
- Experience with DevOps tools and CI/CD pipelines
- Knowledge of security frameworks and compliance standards (e.g., ISO 27001, GDPR)
- Experience in Agile/Scrum environments
- Domain knowledge in a relevant industry (e.g., finance, transportation, healthcare)

Soft Skills:
- Analytical and strategic thinking
- Excellent problem-solving abilities
- Ability to lead and mentor cross-functional teams
- Strong verbal and written communication

Posted 1 week ago

Apply

7.0 years

3 - 9 Lacs

Noida

On-site


Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Work with large, diverse datasets to deliver predictive and prescriptive analytics
- Develop innovative solutions using data modeling, machine learning, and statistical analysis
- Design, build, and evaluate predictive and prescriptive models and algorithms (see the sketch below)
- Use tools like SQL, Python, R, and Hadoop for data analysis and interpretation
- Solve complex problems using data-driven approaches
- Collaborate with cross-functional teams to align data science solutions with business goals
- Lead AI/ML project execution to deliver measurable business value
- Ensure data governance and maintain reusable platforms and tools
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications (technical skills):
- Programming languages: Python, R, SQL
- Machine learning tools: TensorFlow, PyTorch, scikit-learn
- Big data technologies: Hadoop, Spark
- Visualization tools: Tableau, Power BI
- Cloud platforms: AWS, Azure, Google Cloud
- Data engineering: Talend, Databricks, Snowflake, Data Factory
- Statistical software: R, Python libraries
- Version control: Git

Preferred Qualifications:
- Master’s or PhD in Data Science, Computer Science, Statistics, or a related field
- Certifications in data science or machine learning
- 7+ years of experience in a senior data science role with enterprise-scale impact
- Experience managing AI/ML projects end-to-end
- Solid communication skills for technical and non-technical audiences
- Demonstrated problem-solving and analytical thinking
- Business acumen to align data science with strategic goals
- Knowledge of data governance and quality standards

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
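For a concrete flavor of the "design, build, and evaluate predictive models" responsibility, here is a minimal scikit-learn sketch on synthetic data; the dataset is a stand-in only, and no real data or model is implied:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic binary-outcome data standing in for a de-identified dataset.
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Fit a baseline model, then evaluate discrimination on held-out data.
model = GradientBoostingClassifier().fit(X_train, y_train)
preds = model.predict_proba(X_test)[:, 1]
print(f"test AUC: {roc_auc_score(y_test, preds):.3f}")
```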

Posted 1 week ago

Apply

7.0 years

8 - 10 Lacs

Noida

On-site


Clearwater Analytics’ mission is to become the world’s most trusted and comprehensive technology platform for investment reporting, accounting, and analytics. With our team, you will partner with the most sophisticated and innovative institutional investors around the world. If you are infectiously passionate about what you do, intensely committed to clients, and driven by continuous innovation and improvement... we want you to apply!

A career in Software Development will provide you with the opportunity to participate in all phases of the software development lifecycle, including design, implementation, testing and deployment of quality software. With the use of advanced technology, you and your team will work in an agile environment producing designs and code that our customers will use every day.

Responsibilities:
- Developing quality software that is used by some of the world's largest technology firms, fixed income asset managers, and custodian banks
- Participating in Agile meetings to contribute to development strategies and the product roadmap
- Owning critical processes that are highly available and scalable
- Producing tremendous feature enhancements and reacting quickly to emerging technologies
- Encouraging collaboration and stimulating creativity
- Helping mentor entry-level developers
- Contributing to design and architectural decisions
- Providing leadership and expertise to our ever-growing workforce
- Testing and validating, in development and production, code that you own, deploy, and monitor
- Understanding, responding to, and addressing customer issues with empathy and in a timely manner
- Independently moving a major feature or service through an entire lifecycle of design, development, deployment, and maintenance
- Maintaining deep knowledge of multiple teams' domains and a broad understanding of CW systems; creating documentation of system requirements and behavior across domains
- Willingly taking on unowned and undesirable work that helps team velocity and quality
- Staying in touch with client needs and understanding their usage
- Being consulted on quality, scaling and performance requirements before development on new features begins; understanding, finding, and proposing solutions for systemic problems
- Leading the technical breakdown of deliverables and capabilities into features and stories
- Applying expertise in unit testing techniques and design for testability; contributing to automated system testing requirements and design
- Improving code quality and architecture to ensure testability and maintainability
- Understanding, designing, and testing for impact/performance on dependencies and adjacent components and services; building and maintaining code in the context and awareness of the larger system
- Helping less experienced engineers troubleshoot and solve problems
- Being active in mentoring and training of others inside and outside their division

Requirements:
- Strong problem-solving skills
- Experience with an object-oriented or functional language
- Bachelor’s degree in Computer Science or a related field
- 7+ years of professional experience in industry-leading programming languages (Java/Python)
- Background in SDLC and Agile practices
- Experience monitoring production systems
- Experience with machine learning
- Experience working with cloud platforms (AWS/Azure/GCP)
- Experience working with messaging systems such as Cloud Pub/Sub, Kafka, or SQS/SNS
- Must be able to communicate (speak, read, comprehend, write) in English
Desired Experience or Skills:
- Ability to build scalable backend services (microservices, polyglot storage, messaging systems, data processing pipelines)
- Strong analytical skills, with excellent problem-solving abilities in the face of ambiguity
- Excellent written and verbal skills; ability to contribute to software design documentation, presentations, and sequence diagrams, and to present complex technical designs in a concise manner
- Professional experience in building distributed software systems, specializing in big data and NoSQL database technologies (Hadoop, Spark, DynamoDB, HBase, Hive, Cassandra, Vertica)
- Ability to work with relational and NoSQL databases
- Strong organizational, interpersonal, and communication skills; detail oriented; motivated team player

Posted 1 week ago

Apply

0 years

1 - 10 Lacs

Noida

On-site


Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Design, optimize, secure, and administer databases
- Develop and maintain data warehousing solutions
- Implement ETL processes for data integration and provisioning
- Manage data stores within Platform-as-a-Service (PaaS) and cloud solutions
- Provide sizing and configuration assistance for data storage systems
- Analyze business processes and identify opportunities for data optimization
- Manage relationships with software and hardware vendors
- Design schemas and write SQL scripts for data analytics and applications (see the sketch below)
- Ensure data quality and governance standards are met
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications (technical skills):
- SQL: proficiency in SQL for database management and querying
- Python: solid skills in Python for data manipulation and scripting
- Big data technologies: experience with Hadoop and Spark
- ETL tools: proficiency in ETL tools like SSIS
- Cloud platforms: knowledge of AWS, Azure, and Google Cloud
- Data warehousing: experience with data warehousing solutions
- Data visualization: skills in Power BI and Tableau
- DevOps tools: familiarity with Jenkins, GitHub, and Azure DevOps
- Unix/Linux: proficiency in Unix/Linux operating systems
- Snowflake: experience with Snowflake for data warehousing

Preferred Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
- Certifications in data engineering or related fields
- Proven experience in data engineering roles with enterprise-scale impact
- Experience managing data engineering projects end-to-end
- Solid analytical and problem-solving skills
- Excellent communication skills for technical and non-technical audiences
- Knowledge of data governance and quality standards

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
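To make the "design schemas and write SQL scripts" duty concrete, here is a self-contained sketch of a tiny star-schema fragment, using SQLite so it runs anywhere; the table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A minimal dimension table plus a fact table referencing it.
conn.executescript("""
CREATE TABLE dim_member (member_id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE fact_claims (
    claim_id  INTEGER PRIMARY KEY,
    member_id INTEGER REFERENCES dim_member(member_id),
    amount    REAL
);
""")

conn.executemany("INSERT INTO dim_member VALUES (?, ?)",
                 [(1, "North"), (2, "South")])
conn.executemany("INSERT INTO fact_claims VALUES (?, ?, ?)",
                 [(10, 1, 120.0), (11, 1, 80.0), (12, 2, 200.0)])

# An analytics-style rollup of the fact table by dimension attribute.
for region, total in conn.execute("""
        SELECT m.region, SUM(f.amount)
        FROM fact_claims f JOIN dim_member m USING (member_id)
        GROUP BY m.region"""):
    print(region, total)
```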

Posted 1 week ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Overview and Responsibilities:
- Develop and implement AI-driven reporting solutions to improve data analytics and business intelligence
- Collaborate with cross-functional teams to understand reporting requirements and translate them into technical specifications
- Design, develop, and maintain interactive dashboards and reports using tools like Power BI, Tableau, or similar
- Integrate AI models and algorithms into reporting solutions to provide predictive and prescriptive insights
- Optimize data models and reporting processes for performance and scalability
- Conduct data analysis to identify trends, patterns, and insights that can drive business decisions
- Ensure data accuracy, consistency, and integrity in all reporting solutions
- Stay updated with the latest advancements in AI and reporting technologies and apply them to improve existing solutions
- Provide training and support to end users on how to use and interpret AI-driven reports
- Consult with the Data & Analytics team and Reporting Factory developers to build the data infrastructure needed to host and run Reporting GenAI solutions

Qualifications:
- Bachelor’s degree in Computer Science, Data Science, AI/ML, or a related field
- 7-9 years overall experience; 4+ years of professional experience working directly on the design, development and rollout of AI/ML/GenAI solutions
- Proven experience in developing AI-driven reporting solutions
- Experience with AI and machine learning frameworks like TensorFlow, PyTorch, or similar
- Proficiency in programming languages such as Python, R, or SQL
- Strong analytical and problem-solving skills
- Excellent communication and collaboration skills
- Ability to work independently and as part of a team
- Experience with cloud platforms such as Azure, AWS, or Google Cloud is a plus
- Experience / involvement in organization-wide digital transformations preferred
- Knowledge of natural language processing (NLP) and its application in reporting
- Experience with big data technologies like Hadoop, Spark, or similar
- Familiarity with data warehousing concepts and tools
- Understanding of business intelligence and data analytics best practices

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Andhra Pradesh

On-site


Business Analytics Lead Analyst - HIH - Evernorth

About Evernorth: Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefits solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people.

Business Analytics Lead Analyst (Dashboarding): The Customer Experience & Operations Enablement Analytics organization offers solutions that provide data, reporting, and actionable insights to internal/external business partners to improve customer experience, reduce cost, measure business performance, and inform business decisions. The Business Analytics Lead Analyst will be responsible for dashboard and report creation as well as pulling data to meet ad hoc measurement needs. The individual will create prototypes of reporting needs and support manual report/scorecard creation where needed when automated dashboards are not feasible. The analytics lead analyst will be comfortable working directly with the Operations teams to learn about their processes and where the data and reporting fit in. We are looking for candidates who can work directly with operations team members to understand requirements and do their own development and testing.

Responsibilities include:
- Using SQL to write queries to answer questions and perform ETL tasks to create datasets
- Utilizing Tableau or other similar data visualization tools to automate scorecards and reports
- Using business intelligence tools to create self-service reporting for business partners
- Conducting self-driven data exploration and documentation of tables, schemas, and tests
- Using SQL to query data structures to help inform our business partners
- Examining and interpreting the data to discover weaknesses and identify root causes
- Completing ad hoc requests for business partners' data needs
- Identifying and implementing automation to consolidate similar or repeated ad hoc requests
- Understanding business needs to better inform reporting and analytics duties
- Giving guidance on any recurring problems or issues
- Completing proposals in cooperation and conjunction with subject-matter experts (SMEs)
- Refactoring reporting to enhance performance, provide deeper insight, and answer questions
- Updating project documents as well as status reports

Qualifications (required experience):
- 5-8 years of relevant analytics experience with a focus on proficiency in Structured Query Language (SQL) and Oracle
- Experience with business intelligence software (Tableau, Power BI, Looker, etc.)
- 3-5 years of experience with: scripting languages (Python, PowerShell, VBA); big data platforms (Databricks, Hadoop, AWS)
- Excellent verbal, written and interpersonal communication skills a must
- Problem-solving, consulting skills, teamwork, leadership, and creativity skills a must
- Analytical mind with an outstanding ability to collect and analyze data
- Expertise in contact center or workforce planning operations preferred
- Proficiency in Agile practices (Jira) preferred

About Evernorth Health Services: Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: ETL Talend Lead
Location: Bangalore, Hyderabad, Chennai, Pune
Work Mode: Hybrid
Job Type: Full-Time
Shift Timings: 2:00 - 11:00 PM
Years of Experience: 8 - 15 years

ETL Development Lead responsibilities and skills:
- Experience leading and mentoring a team of Talend ETL developers
- Providing technical direction and guidance on ETL/data integration development to the team
- Designing complex data integration solutions using Talend and AWS
- Collaborating with stakeholders to define project scope, timelines, and deliverables
- Contributing to project planning, risk assessment, and mitigation strategies
- Ensuring adherence to project timelines and quality standards
- Strong understanding of ETL/ELT concepts, data warehousing principles, and database technologies
- Design, develop, and implement ETL (Extract, Transform, Load) processes using Talend Studio and other Talend components
- Build and maintain robust and scalable data integration solutions to move and transform data between various source and target systems (e.g., databases, data warehouses, cloud applications, APIs, flat files)
- Develop and optimize Talend jobs, workflows, and data mappings to ensure high performance and data quality
- Troubleshoot and resolve issues related to Talend jobs, data pipelines, and integration processes
- Collaborate with data analysts, data engineers, and other stakeholders to understand data requirements and translate them into technical solutions
- Perform unit testing and participate in system integration testing of ETL processes
- Monitor and maintain Talend environments, including job scheduling and performance tuning
- Document technical specifications, data flow diagrams, and ETL processes
- Stay up to date with the latest Talend features, best practices, and industry trends
- Participate in code reviews and contribute to the establishment of development standards
- Proficiency in using Talend Studio, Talend Administration Center/TMC, and other Talend components
- Experience working with various data sources and targets, including relational databases (e.g., Oracle, SQL Server, MySQL, PostgreSQL), NoSQL databases, the AWS cloud platform, APIs (REST, SOAP), and flat files (CSV, TXT)
- Strong SQL skills for data querying and manipulation
- Experience with data profiling, data quality checks, and error handling within ETL processes
- Familiarity with job scheduling tools and monitoring frameworks
- Excellent problem-solving, analytical, and communication skills
- Ability to work independently and collaboratively within a team environment
- Basic understanding of AWS services, i.e. EC2, S3, EFS, EBS, IAM, AWS roles, CloudWatch Logs, VPC, security groups, Route 53, network ACLs, Amazon Redshift, Amazon RDS, Amazon Aurora, Amazon DynamoDB
- Understanding of AWS data integration services, i.e. Glue, Data Pipeline, Amazon Athena, AWS Lake Formation, AppFlow, Step Functions

Preferred Qualifications:
- Experience leading and mentoring a team of 8+ Talend ETL developers
- Experience working with US healthcare customers
- Bachelor's degree in Computer Science, Information Technology, or a related field
- Talend certifications (e.g., Talend Certified Developer), AWS Certified Cloud Practitioner / Data Engineer Associate
- Experience with AWS data and infrastructure services
- Basic working understanding of Terraform and GitLab is required
- Experience with scripting languages such as Python or shell scripting
- Experience with agile development methodologies
- Understanding of big data technologies (e.g., Hadoop, Spark) and the Talend Big Data platform

Posted 1 week ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site


Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities:
- Create solution outlines and macro designs to describe end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles
- Contribute to pre-sales and sales support through RFP responses, solution architecture, planning and estimation
- Contribute to reusable component / asset / accelerator development to support capability development
- Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud and related technologies
- Participate in customer PoCs to deliver the outcomes
- Participate in delivery reviews / product reviews and quality assurance, and work as a design authority

Preferred Education: Non-Degree Program

Required Technical and Professional Expertise:
- Experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems
- Experience in data engineering and architecting data platforms
- Experience in architecting and implementing data platforms on the Azure Cloud Platform; experience on Azure cloud is mandatory (ADLS Gen1 / Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, Airflow
- Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks

Preferred Technical and Professional Experience:
- Experience in architecting complex data platforms on the Azure Cloud Platform and on-prem
- Experience and exposure to implementation of Data Fabric and Data Mesh concepts and solutions like Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric
- Exposure to data cataloging and governance solutions like Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake data glossary, etc.

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 12 Lacs

Pune, Chennai, Bengaluru

Hybrid


Hello candidates, we are hiring!!

Job Position: Data Streaming Engineer
Experience: 5+ years
Location: Mumbai, Pune, Chennai, Bangalore
Work mode: Hybrid (3 days WFO)

JOB DESCRIPTION
Data Streaming @ offshore:
• Flink, Python
• Data lake systems (OLAP systems)
• SQL (should be able to write complex SQL queries)
• Orchestration (Apache Airflow is preferred; see the sketch below)
• Hadoop (Spark and Hive: optimization of Spark and Hive apps)
• Snowflake (good to have)
• Data quality (good to have)
• File storage (S3 is good to have)

NOTE - Candidates can share their resume at shrutia.talentsketchers@gmail.com
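Since the posting highlights Airflow for orchestration, here is a minimal DAG sketch, assuming Airflow 2.x (2.4+ for the `schedule` argument); the job paths and commands are placeholders, not from the posting:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical hourly pipeline: submit a Flink job, then validate the lake.
with DAG(
    dag_id="streaming_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@hourly",
    catchup=False,
) as dag:
    submit_flink = BashOperator(
        task_id="submit_flink_job",
        bash_command="flink run -d /opt/jobs/enrich_events.jar",
    )
    quality_check = BashOperator(
        task_id="data_quality_check",
        bash_command="python /opt/jobs/check_lake_counts.py",
    )
    submit_flink >> quality_check
```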

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


We are seeking a Data Solution Architect (Azure; Databricks). In this role, you will leverage your skills in artificial intelligence and machine learning to design robust data analytics solutions. If you are ready to make an impact, apply today!

Responsibilities:
- Design data analytics solutions utilizing the big data technology stack
- Create and present solution architecture documents with technical details
- Collaborate with business stakeholders to identify solution requirements and key scenarios
- Conduct solution architecture reviews and audits while calculating and presenting ROI
- Lead implementation of solutions from establishing project requirements to go-live
- Engage in pre-sale activities including customer communications and RFP processing
- Develop proposals and design solutions while presenting architecture to customers
- Create and follow a personal education plan in the technology stack and solution architecture
- Maintain knowledge of industry trends and best practices
- Engage new clients to drive business growth in the big data space

Requirements:
- Strong hands-on experience as a Big Data developer with a solid design background
- Experience delivering data analytics projects and architecture guidelines
- Experience in big data solutions on premises and in the cloud
- Production project experience in at least one big data technology
- Knowledge of batch processing frameworks like Hadoop, MapReduce, Spark, or Hive
- Familiarity with NoSQL databases such as Cassandra, HBase, or Kudu
- Understanding of Agile development methodology with an emphasis on Scrum
- Experience in direct customer communications and pre-sales consulting
- Experience working within a consulting environment would be highly valuable

Posted 1 week ago

Apply

3.0 years

0 Lacs

Kolkata, West Bengal, India

On-site


About the Role: We are seeking a highly skilled and motivated Lead Data Scientist to drive AI/ML initiatives and lead a dynamic team. The ideal candidate should have a strong foundation in machine learning, deep learning, and AI technologies, along with hands-on experience in solving complex business problems through data-driven approaches.

Responsibilities:
- Lead and mentor a team of data scientists, ensuring effective project execution
- Develop and implement advanced AI/ML models to optimize business processes
- Analyze large datasets to uncover meaningful insights and drive strategic decisions
- Collaborate with cross-functional teams to integrate AI solutions into business operations
- Stay ahead of industry trends, emerging technologies, and best practices in AI/ML
- Ensure scalable, efficient, and high-performance AI systems
- Communicate technical findings to non-technical stakeholders effectively

Qualifications:
- Bachelor’s/Master’s degree in Computer Science, Data Science, AI, or a related field
- 3+ years of experience in AI/ML, predictive modeling, and data analytics
- Expertise in Python, R, TensorFlow, PyTorch, and other ML frameworks
- Strong knowledge of deep learning, NLP, and computer vision
- Experience with cloud platforms such as AWS, Azure, or GCP
- Proven ability to lead and deliver AI/ML-driven projects successfully

Preferred Skills:
- Experience working with big data technologies (Hadoop, Spark, etc.)
- Familiarity with MLOps and deploying ML models in production
- Strong problem-solving, analytical, and communication skills

Posted 1 week ago

Apply

8.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office


What you'll be doing:

  • Assist in developing machine learning models based on project requirements.
  • Work with datasets by preprocessing, selecting appropriate data representations, and ensuring data quality.
  • Perform statistical analysis and fine-tuning using test results.
  • Support training and retraining of ML systems as needed.
  • Help build data pipelines for collecting and processing data efficiently.
  • Follow coding and quality standards while developing AI/ML solutions.
  • Contribute to frameworks that help operationalize AI models.

What we seek in you:

  • 8+ years of experience in the IT industry
  • Strong programming skills in languages like Python
  • Hands-on experience with one cloud platform (GCP preferred)
  • Experience working with Docker
  • Environment management (e.g., venv, pip, poetry)
  • Experience with orchestrators like Vertex AI Pipelines, Airflow, etc.
  • Understanding of the full ML lifecycle, end-to-end
  • Data engineering and feature engineering techniques
  • Experience with ML modelling and evaluation metrics
  • Experience with TensorFlow, PyTorch, or another framework
  • Experience with model monitoring
  • Advanced SQL knowledge
  • Awareness of streaming concepts like windowing, late arrival, and triggers (illustrated in the sketch after this listing)
  • Storage: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore, vector databases
  • Ingest: Pub/Sub, Cloud Functions, App Engine, Kubernetes Engine, Kafka, microservices
  • Scheduling: Cloud Composer, Airflow
  • Processing: Cloud Dataproc, Cloud Dataflow, Apache Spark, Apache Flink
  • CI/CD: Bitbucket + Jenkins / GitLab; Infrastructure as Code: Terraform

Life at Next:

At our core, we're driven by the mission of tailoring growth for our customers by enabling them to transform their aspirations into tangible outcomes. We're dedicated to empowering them to shape their futures and achieve ambitious goals. To fulfil this commitment, we foster a culture defined by agility, innovation, and an unwavering commitment to progress. Our organizational framework is both streamlined and vibrant, characterized by a hands-on leadership style that prioritizes results and fosters growth.

Perks of working with us:

  • Clear objectives to ensure alignment with our mission, fostering your meaningful contribution.
  • Abundant opportunities for engagement with customers, product managers, and leadership.
  • Progressive career paths, with insightful guidance from managers through ongoing feedforward sessions.
  • Cultivate and leverage robust connections within diverse communities of interest.
  • Choose your mentor to navigate your current endeavors and steer your future trajectory.
  • Embrace continuous learning and upskilling opportunities through Nexversity.
  • Enjoy the flexibility to explore various functions, develop new skills, and adapt to emerging technologies.
  • Embrace a hybrid work model promoting work-life balance.
  • Access comprehensive family health insurance coverage, prioritizing the well-being of your loved ones.
  • Embark on accelerated career paths to actualize your professional aspirations.

Who we are:

We enable high-growth enterprises to build hyper-personalized solutions that transform their vision into reality. With a keen eye for detail, we apply creativity, embrace new technology, and harness the power of data and AI to co-create tailor-made solutions that meet our customers' unique needs. Join our passionate team and tailor your growth with us!
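The streaming concepts named in the requirements above (windowing, late arrival, triggers) fit together roughly as follows in PySpark Structured Streaming. This is a minimal sketch: the Kafka broker address and topic name are hypothetical placeholders:

    # Windowing, late-arrival handling (watermark), and a micro-batch trigger.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("clickstream-windows").getOrCreate()

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
        .option("subscribe", "clicks")                      # hypothetical topic
        .load()
        .select(
            F.col("timestamp"),
            F.col("value").cast("string").alias("payload"),
        )
    )

    # 10-minute tumbling windows; events more than 15 minutes late are dropped.
    counts = (
        events
        .withWatermark("timestamp", "15 minutes")
        .groupBy(F.window("timestamp", "10 minutes"))
        .count()
    )

    query = (
        counts.writeStream.outputMode("update")
        .format("console")
        .trigger(processingTime="1 minute")  # micro-batch trigger interval
        .start()
    )
    query.awaitTermination()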

Posted 1 week ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Description:

Graviton Research Capital LLP, Gurgaon is looking to hire Software Engineers for our Core Technology team, which has some of the best programmers in India working on cutting-edge technologies to build a super-fast and robust trading infrastructure handling millions of dollars' worth of trading transactions every day.

As a Senior Software Engineer with Graviton, your responsibilities will include:

  • Designing and implementing a high-frequency automated trading system that trades on multiple exchanges
  • Building live reporting and administration tools for the trading system
  • Performance optimization and improving the overall latency of systems, through algorithm research and using cutting-edge tools and techniques
  • End-to-end ownership of modules, including design, development, deployment, and support
  • Growing the team through involvement in the regular hiring process and occasional campus recruitment

Requirements:

The ideal requirements for our candidates are:

  • A degree in Computer Science
  • 3-5 years of experience with C/C++ and object-oriented programming
  • Experience in the HFT industry
  • Expertise in algorithms and data structures
  • Excellent problem-solving skills
  • Strong communication skills
  • A working knowledge of Linux systems

Any of the following is a plus:

  • A good understanding of TCP/IP and Ethernet
  • Knowledge of any other programming language, e.g. Java, Scala, Python, Bash, Lisp, etc.
  • Familiarity with parallel programming models and parallel algorithms
  • Experience with big data environments, e.g. Hadoop, Spark, etc.

Benefits:

Our open and collaborative work culture gives you the freedom to innovate and experiment. Our cubicle-free offices, non-hierarchical work culture, and insistence on hiring the very best create a melting pot for great ideas and technological innovations. Everyone on the team is approachable; there is nothing better than working with friends!

Our perks have you covered:

  • Competitive compensation
  • Annual international team outing
  • Fully covered commuting expenses
  • Best-in-class health insurance
  • Delightful catered breakfasts and lunches
  • A well-stocked kitchen
  • 4 weeks of annual leave along with market holidays
  • Gym and sports club memberships
  • Regular social events and clubs
  • After-work parties

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Data Scientist II

BANGALORE, KARNATAKA / TECH – DATA SCIENCE / FULL-TIME EMPLOYEE

About the Team

Our Data Science team is the Avengers to our S.H.I.E.L.D 🛡. And why not? We are the ones who assemble during the toughest challenges and devise creative solutions, building intelligent systems for millions of our users looking at a thousand different categories of products. We've barely scratched the surface, and have amazing challenges in charting the future of commerce for Bharat. Our typical day involves dealing with fraud detection, inventory optimization, and platform vernacularization. As a Data Scientist, you will navigate uncharted territories with us, discovering new paths to creating solutions for our users. 🔍 You will be at the forefront of interesting challenges and solve unique customer problems in an untapped market.

But wait – there's more to us. Our team is huge on having a well-rounded personal and professional life. When we aren't nose-deep in data, you will most likely find us belting "Summer of '69" at the nearest karaoke bar, or debating who the best Spider-Man is: Maguire, Garfield, or Holland? You tell us ☺️

About the Role

Love deep data? Love discussing solutions instead of problems? Then you could be our next Data Scientist. In a nutshell, your primary responsibility will be enhancing the productivity and utilization of the generated data. Other things you will do are:

  • Work closely with the business stakeholders
  • Transform scattered pieces of information into valuable data
  • Share and present your valuable insights with peers

What You Will Do

  • Develop models and run experiments to infer insights from hard data
  • Improve our product usability and identify new growth opportunities
  • Understand reseller preferences to provide them with the most relevant products
  • Design discount programs to help our resellers sell more
  • Help resellers better recognize end-customer preferences to improve their revenue
  • Use data to identify bottlenecks that will help our suppliers meet their SLA requirements
  • Model seasonal demand to predict key organizational metrics
  • Mentor junior data scientists in the team

What You Will Need

  • Bachelor's/Master's degree in computer science (or similar degrees)
  • 2-4 years of experience as a Data Scientist in a fast-paced organization, preferably B2C
  • Familiarity with neural networks, machine learning, etc.
  • Familiarity with tools like SQL, R, Python, etc.
  • Strong understanding of statistics and linear algebra
  • Strong understanding of hypothesis/model testing and the ability to identify common model-testing errors
  • Experience designing and running A/B tests and drawing insights from them (see the sketch after this listing)
  • Proficiency in machine learning algorithms
  • Excellent analytical skills to fetch data from reliable sources to generate accurate insights
  • Experience in tech and product teams is a plus

Bonus points for:

  • Experience working on personalization or other ML problems
  • Familiarity with Big Data tech stacks like Apache Spark, Hadoop, Redshift

About

It is India's fastest-growing e-commerce company. We started in 2015 with the idea of helping mom & pop stores to sell online. Today, 5% of Indian households shop with us on any given day. We've helped over 15 million individual entrepreneurs start online businesses with zero investment. We're democratizing internet commerce by offering a 0% commission model for sellers on our platform — a first for India. We aim to become the e-commerce destination for Bharat. We're currently valued at $4.9 billion with marquee investors supporting our vision.
Some of them include Sequoia Capital, SoftBank, Fidelity, Prosus Ventures, Facebook, and Elevation Capital. We were also featured in Y Combinator's 2021 Top Companies List and were the only Indian startup to make it to Fast Company's The World's 50 Most Innovative Companies in 2020. We ranked 6th in LinkedIn's Top Startups List 2021. Our strongest asset is our people. We have gender-neutral and inclusive policies to promote our people-first culture.

Our Mission: Democratize internet commerce for everyone.

Our Vision: Enable 100M small businesses in India to succeed online.
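For the A/B-testing requirement above, a minimal significance check on two conversion rates might look like the sketch below. The counts are hypothetical, and statsmodels is one of several libraries that could be used:

    # Two-proportion z-test on hypothetical A/B conversion counts.
    from statsmodels.stats.proportion import proportions_ztest

    conversions = [480, 530]    # control vs. variant conversions (hypothetical)
    exposures = [10000, 10000]  # users exposed to each arm

    stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
    print(f"z = {stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Difference is statistically significant at the 5% level.")
    else:
        print("No significant difference detected.")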

Posted 1 week ago

Apply

8.0 - 13.0 years

15 - 25 Lacs

Bengaluru

Hybrid


We are looking for an enthusiastic and technology-proficient Big Data Engineer who is eager to participate in the design and implementation of a top-notch Big Data solution to be deployed at massive scale. Our customer is one of the world's largest technology companies, based in Silicon Valley with operations all over the world. On this project we are working on the bleeding edge of Big Data technology to develop a high-performance data analytics platform that handles petabyte-scale datasets.

Essential functions

  • Participate in design and development of Big Data analytical applications.
  • Design, support, and continuously enhance the project code base, continuous integration pipeline, etc.
  • Write complex ETL processes and frameworks for analytics and data management (see the sketch after this listing).
  • Implement large-scale near real-time streaming data processing pipelines.
  • Work inside a team of industry experts on cutting-edge Big Data technologies to develop solutions for deployment at massive scale.

Qualifications

  • Strong coding experience with Scala, Spark, Hive, Hadoop.
  • In-depth knowledge of Hadoop and Spark; experience with data mining and stream processing technologies (Kafka, Spark Streaming, Akka Streams).
  • Understanding of the best practices in data quality and quality engineering.
  • Experience with version control systems, Git in particular.
  • Desire and ability to learn new tools and technologies quickly.

Would be a plus

  • Knowledge of Unix-based operating systems (bash/ssh/ps/grep, etc.).
  • Experience with GitHub-based development processes.
  • Experience with JVM build systems (SBT, Maven, Gradle).

We offer

  • Opportunity to work on bleeding-edge projects
  • Work with a highly motivated and dedicated team
  • Competitive salary
  • Flexible schedule
  • Benefits package – medical insurance, sports
  • Corporate social events
  • Professional development opportunities
  • Well-equipped office

About us

Grid Dynamics (NASDAQ: GDYN) is a leading provider of technology consulting, platform and product engineering, AI, and advanced analytics services. Fusing technical vision with business acumen, we solve the most pressing technical challenges and enable positive business outcomes for enterprise companies undergoing business transformation. A key differentiator for Grid Dynamics is our 8 years of experience and leadership in enterprise AI, supported by profound expertise and ongoing investment in data, analytics, cloud & DevOps, application modernization, and customer experience. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India.
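As a sketch of the ETL-with-quality-checks work described above: the posting's stack is Scala, but the same structure in PySpark is shorter to show. Paths, column names, and the 5% rejection threshold are hypothetical:

    # Minimal ETL step with a simple data-quality gate (PySpark).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("events-etl").getOrCreate()

    raw = spark.read.json("/landing/events/")  # hypothetical landing zone

    cleaned = (
        raw.dropDuplicates(["event_id"])
        .filter(F.col("event_time").isNotNull())
    )

    # Fail the batch if more than 5% of rows were rejected.
    total, kept = raw.count(), cleaned.count()
    if total > 0 and kept / total < 0.95:
        raise ValueError(f"Data-quality gate failed: kept {kept}/{total} rows")

    cleaned.write.mode("append").partitionBy("event_date").parquet("/curated/events/")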

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Experience: 5+ Years

Role Overview: Responsible for designing, building, and maintaining scalable data pipelines and architectures. This role requires expertise in SQL, ETL frameworks, big data technologies, cloud services, and programming languages to ensure efficient data processing, storage, and integration across systems.

Requirements:

  • Minimum 5+ years of experience as a Data Engineer or in a similar data-related role.
  • Strong proficiency in SQL for querying databases and performing data transformations.
  • Experience with data pipeline frameworks (e.g., Apache Airflow, Luigi, or custom-built solutions) — see the sketch after this listing.
  • Proficiency in at least one programming language such as Python, Java, or Scala for data processing tasks.
  • Experience with cloud-based data services and data lakes (e.g., Snowflake, Databricks, AWS S3, GCP BigQuery, or Azure Data Lake).
  • Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka).
  • Experience with ETL tools (e.g., Talend, Apache NiFi, SSIS) and data integration techniques.
  • Knowledge of data warehousing concepts and database design principles.
  • Good understanding of NoSQL and big data technologies like MongoDB, Cassandra, Spark, Hadoop, and Hive.
  • Experience with data modeling and schema design for OLAP and OLTP systems.
  • Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).

Educational Qualification: Bachelor's/Master's degree in Computer Science, Information Technology, or a related field.
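To illustrate the pipeline-framework requirement, a minimal Apache Airflow DAG wiring an extract-transform-load sequence might look like this sketch. The task bodies and schedule are placeholders, and the schedule parameter name assumes a recent Airflow 2.x release:

    # Minimal Airflow DAG: three placeholder ETL tasks run in sequence.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull data from source systems")

    def transform():
        print("clean and join datasets")

    def load():
        print("write results to the warehouse")

    with DAG(
        dag_id="daily_etl",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # named schedule_interval on older Airflow versions
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)
        t_extract >> t_transform >> t_load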

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 15 Lacs

New Delhi, Hyderabad, Gurugram

Work from Office


Primary skills – Hadoop, Hive, Python, SQL, PySpark/Spark. Location – Hyderabad / Gurgaon.

Posted 1 week ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts.

Job Category: Software Engineering

About Salesforce

We're Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too — driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good – you've come to the right place.

As an engineering leader, you will focus on developing the team around you. Bring your technical chops to drive your teams to success around feature delivery and live-site management for a complex cloud infrastructure service. You are as enthusiastic about recruiting and building a team as you are about the challenging technical problems your team will solve. You will also help shape, direct, and execute our product vision. You'll be challenged to blend customer-centric principles, industry-changing innovation, and the reliable delivery of new technologies. You will work directly with engineering, product, and design to create experiences that reinforce the Salesforce brand by delighting and wowing our customers with highly reliable and available services.

Responsibilities

  • Drive the vision of enabling a full suite of Salesforce applications on Google Cloud in collaboration with teams across geographies.
  • Build and lead a team of engineers to deliver cloud frameworks, infrastructure automation tools, workflows, and validation platforms on our public cloud platforms.
  • Apply solid experience in building and evolving large-scale distributed systems to reliably process billions of data points.
  • Proactively identify reliability and data-quality problems and drive the triaging and remediation process.
  • Invest in continuous employee development of a highly technical team by mentoring and coaching engineers and technical leads in the team.
  • Recruit and attract top talent.
  • Drive execution and delivery by collaborating with cross-functional teams, architects, product owners, and engineers.

Required Skills/Experiences

  • B.S./M.S. in Computer Science or an equivalent field.
  • 12+ years of relevant experience in software development teams, with 5+ years of experience managing teams.
  • Experience managing 2+ engineering teams.
  • Experience building services on public cloud platforms like GCP, AWS, or Azure.
  • Passionate, curious, creative self-starter who approaches problems with the right methodology and makes intelligent decisions.
  • Laser focus on impact, balancing effort to value, and getting things done.
  • Experience providing mentorship, technical leadership, and guidance to team members.
  • Strong customer service orientation and a desire to help others succeed.
  • Top-notch written and oral communication skills.

Desired Skills/Experiences

  • Working knowledge of modern technologies/services on public cloud.
  • Experience with container orchestration systems, e.g. Kubernetes, Docker, Helios, Fleet.
  • Expertise in open-source technologies like Elasticsearch, Logstash, Kafka, MongoDB, Hadoop, Spark, Trino/Presto, Hive, Airflow, Splunk.

Benefits & Perks

Comprehensive benefits package including well-being reimbursement, generous parental leave, adoption assistance, fertility benefits, and more!
World-class enablement and on-demand training with Trailhead.com. Exposure to executive thought leaders and regular 1:1 coaching with leadership. Volunteer opportunities and participation in our 1:1:1 model for giving back to the community. For more details, visit https://www.salesforcebenefits.com/

Accommodations

If you require assistance due to a disability applying for open positions, please submit a request via this Accommodations Request Form.

Posting Statement

Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that's inclusive and free from discrimination. Know your rights: workplace discrimination is illegal. Any employee or potential employee will be assessed on the basis of merit, competence, and qualifications – without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.

Posted 1 week ago

Apply

Exploring Hadoop Jobs in India

The demand for Hadoop professionals in India has been on the rise in recent years, with many companies leveraging big data technologies to drive business decisions. As a job seeker exploring opportunities in the Hadoop field, it is important to understand the job market, salary expectations, career progression, related skills, and common interview questions.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Pune
  4. Hyderabad
  5. Chennai

These cities are known for their thriving IT industry and have a high demand for Hadoop professionals.

Average Salary Range

The average salary range for Hadoop professionals in India varies based on experience level. Entry-level Hadoop developers can expect to earn INR 4-6 lakhs per annum, while experienced professionals with specialized skills can earn upwards of INR 15 lakhs per annum.

Career Path

In the Hadoop field, a typical career path may include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually progressing to roles like Data Architect or Big Data Engineer.

Related Skills

In addition to Hadoop expertise, professionals in this field are often expected to have knowledge of related technologies such as Apache Spark, HBase, Hive, and Pig. Strong programming skills in languages like Java, Python, or Scala are also beneficial.

Interview Questions

  • What is Hadoop and how does it work? (basic)
  • Explain the difference between HDFS and MapReduce. (medium)
  • How do you handle data skew in Hadoop? (medium)
  • What is YARN in Hadoop? (basic)
  • Describe the concept of NameNode and DataNode in HDFS. (medium)
  • What are the different types of join operations in Hive? (medium)
  • Explain the role of the ResourceManager in YARN. (medium)
  • What is the significance of the shuffle phase in MapReduce? (medium) (see the worked example after this list)
  • How does speculative execution work in Hadoop? (advanced)
  • What is the purpose of the Secondary NameNode in HDFS? (medium)
  • How do you optimize a MapReduce job in Hadoop? (medium)
  • Explain the concept of data locality in Hadoop. (basic)
  • What are the differences between Hadoop 1 and Hadoop 2? (medium)
  • How do you troubleshoot performance issues in a Hadoop cluster? (advanced)
  • Describe the advantages of using HBase over traditional RDBMS. (medium)
  • What is the role of the JobTracker in Hadoop? (medium)
  • How do you handle unstructured data in Hadoop? (medium)
  • Explain the concept of partitioning in Hive. (medium)
  • What is Apache ZooKeeper and how is it used in Hadoop? (advanced)
  • Describe the process of data serialization and deserialization in Hadoop. (medium)
  • How do you secure a Hadoop cluster? (advanced)
  • What is the CAP theorem and how does it relate to distributed systems like Hadoop? (advanced)
  • How do you monitor the health of a Hadoop cluster? (medium)
  • Explain the differences between Hadoop and traditional relational databases. (medium)
  • How do you handle data ingestion in Hadoop? (medium)
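For the MapReduce questions above, the classic word-count example shows the moving parts: the mapper emits (word, 1) pairs, the shuffle phase sorts them by key, and the reducer sums adjacent counts in one pass. A minimal Hadoop Streaming version in Python might look like this (file names are conventional, not mandated):

    # mapper.py — Hadoop Streaming feeds input lines on stdin.
    import sys

    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

    # reducer.py — input arrives sorted by key after the shuffle phase,
    # so counts for the same word are adjacent and can be summed in one pass.
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print(f"{current_word}\t{current_count}")
            current_word, current_count = word, int(count)
    if current_word is not None:
        print(f"{current_word}\t{current_count}")

A job like this is submitted with the Hadoop Streaming jar, e.g. hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py -input /data/in -output /data/out, where the jar path and input/output locations are placeholders that vary by distribution.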

Closing Remark

As you navigate the Hadoop job market in India, remember to stay updated on the latest trends and technologies in the field. By honing your skills and preparing diligently for interviews, you can position yourself as a strong candidate for lucrative opportunities in the big data industry. Good luck on your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
