
4871 Hadoop Jobs - Page 50

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

4.0 - 8.0 years

Salary not disclosed

Mumbai, Maharashtra, India

On-site


A career in our Advisory Acceleration Centre is the natural extension of PwC's leading-class global delivery capabilities. We provide premium, cost-effective, high-quality services that support process quality and delivery capability for client engagements.

Years of Experience: Candidates with 4-8 years of hands-on experience

Position Requirements (Must Have):
- Experience architecting and delivering highly scalable, distributed, cloud-based enterprise data solutions
- Strong expertise in end-to-end implementation of cloud data engineering solutions such as enterprise data lakes and data hubs in AWS
- Proficient in Lambda or Kappa architectures
- Awareness of data management concepts and data modelling
- Strong AWS hands-on expertise with a programming background, preferably Python/Scala
- Good knowledge of Big Data frameworks and related technologies; experience in Hadoop and Spark is mandatory
- Strong experience with AWS compute services such as EMR and Glue, and storage services such as S3, Redshift and DynamoDB (a minimal PySpark sketch follows this listing)
- Good experience with at least one AWS streaming service: Kinesis, SQS or MSK
- Troubleshooting and performance-tuning experience across the Spark framework: Spark Core, Spark SQL and Spark Streaming
- Strong understanding of the dbt ELT tool and usage of dbt macros
- Good knowledge of application DevOps tools (Git, CI/CD frameworks); experience in Jenkins or GitLab, with rich experience in source code management tools such as CodePipeline, CodeBuild and CodeCommit
- Experience with AWS CloudWatch, CloudTrail, AWS Account Config and AWS Config Rules
- Good knowledge of AWS security and AWS key management
- Strong understanding of cloud data migration processes, methods and the project lifecycle
- Good analytical and problem-solving skills
- Good communication and presentation skills

Education: Any graduate.
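To make the Spark-on-AWS requirement concrete, here is a minimal PySpark batch sketch of the kind of S3 data-lake transformation the listing describes. The bucket paths, column names and aggregation are hypothetical illustrations, not part of the posting.

```python
# Minimal PySpark sketch: read raw events from an S3 data lake, aggregate,
# and write a curated, partitioned output. All names/paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("orders-daily-aggregate")
         .getOrCreate())

# Read raw order events (hypothetical S3 path).
orders = spark.read.parquet("s3://example-datalake/raw/orders/")

# Aggregate revenue per customer per day.
daily = (orders
         .withColumn("order_date", F.to_date("order_ts"))
         .groupBy("customer_id", "order_date")
         .agg(F.sum("amount").alias("revenue"),
              F.count("*").alias("order_count")))

# Partitioning by date keeps downstream scans cheap.
(daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-datalake/curated/daily_revenue/"))
```

On EMR or Glue the same job would typically be submitted via spark-submit or a Glue job definition; the transformation logic is unchanged.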

Posted 1 week ago

Apply

3.0 years

Salary not disclosed

Pune, Maharashtra, India

On-site


Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers.

Requirements:
- 3+ years of direct experience working in IT infrastructure
- 2+ years in a customer-facing role working with enterprise clients
- Experience implementing and/or maintaining technical solutions in virtualised environments
- Experience in the design, architecture and implementation of data warehouses, data pipelines and flows
- Experience developing software in one or more languages such as Java and Python; proficient with SQL
- Experience designing and deploying large-scale distributed data processing systems with one or more technologies such as Oracle, MS SQL Server, MySQL, PostgreSQL, MongoDB, Cassandra, Redis, Hadoop, Spark, HBase, Vertica, Netezza, Teradata, Tableau, Qlik or MicroStrategy
- Customer-facing migration experience, including service discovery, assessment, planning, execution, and operations
- Demonstrated excellent communication, presentation, and problem-solving skills

Mandatory Certification: Google Cloud Professional Data Engineer
Mandatory Skill Sets: GCP Architecture/Data Engineering, SQL, Python (a small BigQuery sketch follows this listing)
Preferred Skill Sets: GCP Architecture/Data Engineering, SQL, Python
Years of Experience Required: 8-10 years
Qualifications: B.E/B.Tech/MBA/MCA
Required Skills: Python (Programming Language)
Travel Requirements: Not specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
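As a small illustration of the GCP data-engineering skill set named above, here is a sketch that runs a SQL transformation in BigQuery from Python. The project, dataset and table names are hypothetical.

```python
# Minimal BigQuery sketch: run a SQL aggregation and write the results to a
# destination table. Project/dataset/table identifiers are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

query = """
    SELECT customer_id, DATE(order_ts) AS order_date, SUM(amount) AS revenue
    FROM `example-project.sales.orders`
    GROUP BY customer_id, order_date
"""

# Write results to a target table rather than streaming rows back.
job_config = bigquery.QueryJobConfig(
    destination="example-project.sales.daily_revenue",
    write_disposition="WRITE_TRUNCATE",
)
client.query(query, job_config=job_config).result()  # blocks until the job finishes
print("daily_revenue refreshed")
```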

Posted 1 week ago

Apply

10.0 - 20.0 years

35 - 60 Lacs

Mumbai, India

Work from Office


Responsibilities:
- Design full-stack solutions with cloud infrastructure (IaaS, PaaS, SaaS, on-premise, hybrid cloud)
- Support application and infrastructure design and build as a subject matter expert
- Implement proofs of concept to demonstrate the value of the solution designed
- Provide consulting support to ensure delivery teams build scalable, extensible, highly available, low-latency, and highly usable applications
- Ensure solutions are aligned with requirements from all stakeholders, such as consumers, business, IT, security and compliance
- Ensure that all enterprise IT parameters and constraints are considered as part of the design
- Design an appropriate technical solution to meet business requirements, which may involve hybrid cloud environments including cloud-native architecture, microservices, etc.
- Working knowledge of a high-availability, low-latency end-to-end technology stack is especially important, using both physical and virtual load balancing, caching, and scaling technology
- Awareness of full-stack web development frameworks such as Angular / React / Vue
- Awareness of relational and non-relational / NoSQL databases such as MongoDB / MS SQL / Cassandra / Neo4j / DynamoDB
- Awareness of data streaming platforms such as Apache Kafka / Apache Flink / AWS Kinesis
- Working experience using AWS Step Functions or Azure Logic Apps with serverless Lambda or Azure Functions (see the Lambda sketch after this listing)
- Optimise and incorporate the inputs of specialists in solution design; establish the validity of a solution and its components with both short-term and long-term implications
- Identify the scalability options and implications on IT strategy and/or related implications of a solution, and include these in design activities and planning
- Build strong professional relationships with key IT and business executives; be a trusted advisor for cross-functional and management teams
- Partner effectively with other teams to ensure problem resolution
- Provide solutions and advice; create architectures and presentations; document and effectively transfer knowledge to internal and external stakeholders
- Demonstrate knowledge of public cloud technology and solutions; apply a broad understanding of technical innovations and trends in solving business problems
- Manage special projects and strategic initiatives as assigned by management
- Implement and assist in developing policies for information security and environmental compliance, ensuring the highest standards are maintained
- Ensure adherence to SLAs with internal and external customers and compliance with information security policies, including risk assessments and procedure reviews
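For the serverless requirement above, here is a minimal sketch of an AWS Lambda handler in Python that a Step Functions state machine could invoke. The table name and payload fields are hypothetical.

```python
# Minimal Lambda handler sketch: persist an order record to DynamoDB.
# The Step Functions state input arrives as `event`; the return value becomes
# the input of the next state. Table and field names are hypothetical.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-orders")  # hypothetical table

def handler(event, context):
    order = event["order"]
    table.put_item(Item={
        "order_id": order["id"],
        "status": "RECEIVED",
    })
    return {"order_id": order["id"], "status": "RECEIVED"}
```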

Posted 1 week ago

Apply

3.0 years

Salary not disclosed

Hyderabad, Telangana, India

On-site


Description
As a Data Engineer you will build and maintain complex data pipelines and assemble large, complex datasets to generate business insights, enable data-driven decision making, and support the rapidly growing and dynamic business demand for data. You will have the opportunity to collaborate with business analysts, managers, software development engineers, and data engineers to determine how best to design, implement and support solutions. You will be challenged and given tremendous growth opportunity in a customer-facing, fast-paced, agile environment.

Key job responsibilities:
- Design, implement and support analytical data platform solutions for data-driven decisions and insights
- Design data schemas and operate internal data warehouses and SQL/NoSQL database systems
- Work on data model design, architecture, implementation, discussion and optimization
- Interface with other teams to extract, transform, and load data from a wide variety of sources using AWS big data technologies like EMR, Redshift, Elasticsearch, etc.
- Work with AWS technologies such as S3, Redshift, Lambda, Glue, etc., and explore and learn the latest AWS technologies to provide new capabilities and increase efficiency
- Work on the data lake platform and its components, such as Hadoop and Amazon S3
- Work with SQL-on-Hadoop technologies such as Spark, Hive, Impala, etc. (see the Spark SQL sketch after this listing)
- Help continually improve ongoing analysis processes, optimizing or simplifying self-service support for customers
- Possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment
- Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation
- Enjoy working closely with your peers in a group of talented engineers and gain knowledge
- Be enthusiastic about building deep domain knowledge on Amazon's various business domains
- Own the development and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions

Basic Qualifications:
- 3+ years of data engineering experience
- 4+ years of SQL experience
- Experience with data modeling, warehousing and building ETL pipelines

Preferred Qualifications:
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company: Amazon Dev Center India - Hyderabad
Job ID: A2983400
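To illustrate the SQL-on-Hadoop responsibility above, here is a minimal PySpark sketch that queries a Hive-registered data-lake table through Spark SQL. The database and table names are hypothetical.

```python
# Minimal Spark SQL sketch: query a Hive-metastore table from Spark.
# Database/table/column names are hypothetical.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("lake-adhoc-analysis")
         .enableHiveSupport()   # resolve tables through the Hive metastore
         .getOrCreate())

top_products = spark.sql("""
    SELECT product_id, COUNT(*) AS views
    FROM lake.page_views          -- hypothetical Hive table backed by S3
    WHERE view_date = '2025-06-01'
    GROUP BY product_id
    ORDER BY views DESC
    LIMIT 10
""")
top_products.show()
```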

Posted 1 week ago

Apply

7.0 years

Salary not disclosed

India

On-site


Experience Required: 7+ years
Working Days: Monday to Friday
Budget: Up to 18 LPA

We are seeking a seasoned Data Analyst with 7-10 years of experience in data transformation, reporting, and analysis within the Insurance, Finance, or Banking domain. The ideal candidate will possess strong technical expertise in Power BI report development, data modeling, predictive analytics, and machine learning, along with a deep understanding of data lifecycle management and performance monitoring.

Key Responsibilities:
- Transform raw data into actionable insights that support strategic business decisions.
- Manage the end-to-end lifecycle of data analysis projects, from requirement gathering to implementation and coordination.
- Develop, refine, and maintain advanced reports and dashboards using Power BI and other visualization tools.
- Monitor key performance metrics to ensure system optimization and identify areas for improvement.
- Ensure data accuracy and consistency by implementing robust quality control measures (see the sketch after this listing).
- Analyze and interpret complex datasets to identify trends, patterns, and insights for informed decision-making.
- Collaborate with cross-functional teams to align data strategies with business goals.
- Recommend and implement improvements to streamline data collection, transformation, and analysis processes.
- Monitor and enhance the performance of data management systems continuously.
- Maintain and update a centralized repository of all data-related artifacts, tools, and procedures.
- Perform additional functions as assigned to support data initiatives.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, Industrial Engineering, or Management.
- Domain experience in Insurance, Financial Services, or Banking is mandatory.

Technical Skills and Expertise:
- Expert-level proficiency in Power BI report development.
- Hands-on experience with database management systems (Oracle, Microsoft SQL Server).
- Proficiency in UI and query tools.
- Familiarity with Agile development methodologies.
- Experience with predictive modeling, natural language processing (NLP), and text analytics.
- Skilled in data modeling tools (e.g., ERwin, Enterprise Architect, Visio).
- Proficient in data mining and ETL processes.
- Working knowledge of UNIX, Linux, Solaris, and MS Windows.
- Experience with Hadoop and NoSQL databases.
- Strong data visualization skills.
- Prior experience working in the insurance domain is essential.
- Hands-on experience with machine learning models and applications.

Key Competencies:
- Strong analytical thinking and problem-solving skills.
- Excellent verbal and written communication.
- Up-to-date knowledge of emerging tools and trends in data analytics.
- Ability to work independently and collaboratively in cross-functional teams.

For a quick response, please fill out the Job Application Form: https://docs.google.com/forms/d/e/1FAIpQLSeBy7r7b48Yrqz4Ap6-2g_O7BuhIjPhcj-5_3ClsRAkYrQtiA/viewform
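As a small illustration of the quality-control responsibility above, here is a minimal pandas sketch of an automated data-quality check. The file path, column names and thresholds are hypothetical.

```python
# Minimal data-quality-check sketch with pandas. Source file, columns and
# tolerance values are hypothetical illustrations.
import pandas as pd

df = pd.read_csv("claims_extract.csv")  # hypothetical source extract

issues = []
if df["claim_id"].duplicated().any():
    issues.append("duplicate claim_id values")
if df["claim_amount"].lt(0).any():
    issues.append("negative claim amounts")
null_rate = df["policy_number"].isna().mean()
if null_rate > 0.01:  # tolerate at most 1% missing policy numbers
    issues.append(f"policy_number null rate {null_rate:.1%} exceeds 1%")

if issues:
    raise ValueError("data quality check failed: " + "; ".join(issues))
print("data quality checks passed")
```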

Posted 1 week ago

Apply

1.0 - 3.0 years

Salary not disclosed

Hyderabad

On-site

India - Hyderabad
Job ID: R-217305
Additional Locations: India - Hyderabad
Work Location Type: On Site
Date Posted: Jun. 05, 2025
Category: Information Systems

Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas, Oncology, Inflammation, General Medicine, and Rare Disease, we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Senior Associate Full Stack Software Engineer

What you will do
Let's do this. Let's change the world. In this vital role you will be responsible for designing, developing, and maintaining software applications and solutions that meet business needs and ensuring the availability and performance of critical systems and applications. This role involves working closely with product managers, designers, data engineers, and other engineers to create high-quality, scalable software solutions, automating operations, monitoring system health, and responding to incidents to minimize downtime. You will play a key role in a regulatory submission content automation initiative which will modernize and digitize the regulatory submission process, positioning Amgen as a leader in regulatory innovation. The initiative applies innovative technologies, including Generative AI, Structured Content Management, and integrated data, to automate the creation and management of regulatory content.

Roles & Responsibilities:
- Possess strong rapid-prototyping skills and quickly translate concepts into working code
- Contribute to both front-end and back-end development using cloud technology
- Develop innovative solutions using generative AI technologies
- Ensure code quality and consistency with standard methodologies
- Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations
- Identify and resolve technical challenges effectively
- Stay updated with the latest trends and advancements
- Work closely with the product team, business team, and other collaborators
- Design, develop, and implement applications and modules, including custom reports, interfaces, and enhancements
- Analyze and understand the functional and technical requirements of applications, solutions and systems, and translate them into software architecture and design specifications
- Develop and implement unit tests, integration tests, and other testing strategies to ensure the quality of the software
- Identify and resolve software bugs and performance issues
- Work closely with multi-functional teams, including product management, design, and QA, to deliver high-quality software on time
- Customize modules to meet specific business requirements
- Integrate with other systems and platforms to ensure seamless data flow and functionality
- Provide ongoing support and maintenance for applications, ensuring that they operate smoothly and efficiently

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
- Master's degree and 1 to 3 years of experience in Computer Science, IT or a related field; OR
- Bachelor's degree and 3 to 5 years of experience in Computer Science, IT or a related field; OR
- Diploma and 7 to 9 years of experience in Computer Science, IT or a related field

Must-Have Skills:
- Proficiency in Python/PySpark development, FastAPI, PostgreSQL, Databricks, DevOps tools, CI/CD, and data ingestion; candidates should be able to write clean, efficient, and maintainable code (a minimal FastAPI sketch follows this listing)
- Knowledge of HTML, CSS, and JavaScript, along with popular front-end frameworks like React or Angular, is required to build interactive and responsive web applications
- In-depth knowledge of data engineering concepts, ETL processes, and data architecture principles
- Solid understanding of cloud computing principles, particularly within the AWS ecosystem
- Solid understanding of software development methodologies, including Agile and Scrum
- Experience with version control systems like Git
- Hands-on experience with various cloud services and an understanding of their pros and cons within well-architected cloud design principles
- Strong problem-solving and analytical skills; ability to learn quickly; good communication and interpersonal skills
- Experience with API integration, serverless, and microservices architecture
- Experience with SQL/NoSQL databases and vector databases for large language models

Good-to-Have Skills:
- Solid understanding of cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker, Kubernetes)
- Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk)
- Experience with data processing tools like Hadoop, Spark, or similar

Soft Skills:
- Excellent analytical and problem-solving skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public-speaking skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Make a lasting impact with the Amgen team: careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
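For the FastAPI requirement above, here is a minimal, self-contained sketch of a FastAPI service. The endpoint, model and in-memory store are hypothetical; a real service would sit on PostgreSQL via a connection pool or ORM.

```python
# Minimal FastAPI sketch: a tiny document CRUD surface. All names are
# hypothetical; the dict stands in for a PostgreSQL-backed store.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Document(BaseModel):
    doc_id: str
    title: str

_DOCS: dict[str, Document] = {}  # in-memory stand-in for the database

@app.post("/documents", status_code=201)
def create_document(doc: Document) -> Document:
    _DOCS[doc.doc_id] = doc
    return doc

@app.get("/documents/{doc_id}")
def read_document(doc_id: str) -> Document:
    if doc_id not in _DOCS:
        raise HTTPException(status_code=404, detail="document not found")
    return _DOCS[doc_id]
```

Run locally with `uvicorn app:app --reload` (assuming the file is named app.py).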

Posted 1 week ago

Apply

5.0 years

4 - 8 Lacs

Hyderabad

On-site

Req ID: 321816

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a DevOps Developer to join our team in Hyderabad, Telangana (IN-TG), India (IN). NTT DATA Services currently seeks DevOps Developers to join our team in Hyderabad.

Required Skills (DevOps Engineer):
- 5+ years of hands-on experience with Big Data technologies and Cloudera cluster implementation or administration
- Primary skill: 5+ years of professional experience in Python development and CI/CD integration
- Primary skill: 3+ years of Python and/or scripting experience related to automation and APIs (see the sketch after this listing)
- Primary skill: 3+ years of Ansible automation experience
- Primary skill: strong knowledge of enterprise Linux OS security and configuration
- Experience in containerization technology: deployments, monitoring, automation, etc.
- 3+ years of hands-on experience integrating cluster metrics with Grafana or similar tools
- Strong understanding of and experience with distributed data platforms and the big data ecosystem (e.g. Hadoop, Hive, Spark)
- Ability to work independently and collaborate effectively within cross-functional teams
- Strong communication and documentation skills
- Familiarity with RESTful APIs and web services
- Knowledge of database systems (SQL and NoSQL)
- Excellent problem-solving skills and attention to detail
- Strong communication and teamwork abilities

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
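For the Python automation-and-APIs skill above, here is a minimal sketch that pulls a cluster metric from a Prometheus-compatible HTTP endpoint, the kind of data that would feed a Grafana dashboard. The endpoint URL and metric labels are hypothetical.

```python
# Minimal monitoring-automation sketch: query a Prometheus-compatible API
# for node availability. URL and job label are hypothetical.
import requests

PROM_URL = "http://prometheus.example.internal:9090"  # hypothetical endpoint

resp = requests.get(
    f"{PROM_URL}/api/v1/query",
    params={"query": 'up{job="hadoop-datanode"}'},  # hypothetical job label
    timeout=10,
)
resp.raise_for_status()

for result in resp.json()["data"]["result"]:
    instance = result["metric"].get("instance", "unknown")
    value = result["value"][1]  # value pairs are [timestamp, value]
    print(f"{instance}: up={value}")
```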

Posted 1 week ago

Apply

10.0 years

2 - 9 Lacs

Hyderabad

On-site

Job description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team. In this role, you will:

1. Spark Application Development
- Design, develop, and optimize distributed data processing pipelines using Apache Spark.
- Implement ETL processes for large-scale data ingestion, transformation, and storage.
- Write efficient, maintainable, and scalable Spark jobs in PySpark, Scala, or Java (see the streaming sketch after this listing).
- Collaborate with data engineers and analysts to ensure data quality and reliability.

2. DevOps Leadership
- Design and implement CI/CD pipelines for Spark applications and big data workflows.
- Automate deployment, monitoring, and scaling of Spark jobs in cloud or on-premises environments.
- Manage infrastructure as code (IaC) using tools like Terraform, Ansible, or CloudFormation.
- Ensure system reliability, availability, and performance through proactive monitoring and alerting.

3. Technical Leadership
- Lead a team of developers and DevOps engineers, providing technical guidance and mentorship.
- Define best practices for Spark development, DevOps, and cloud-native architectures.
- Conduct code reviews, enforce coding standards, and ensure adherence to project timelines.
- Collaborate with stakeholders to gather requirements and translate them into technical solutions.

4. Cloud and Big Data Ecosystem
- Work with cloud platforms (AWS, Azure, GCP) to deploy and manage Spark applications.
- Integrate Spark with big data tools like Hadoop, Hive, Kafka, and Delta Lake.
- Optimize resource usage and cost efficiency in cloud-based Spark clusters.

Requirements
To be successful in this role, you should meet the following requirements:

1) Technical Expertise:
- Strong experience with Apache Spark 3.x and Delta Lake (PySpark, Scala, or Java).
- Proficiency in big data technologies (Hadoop, Hive, Kafka, etc.).
- Hands-on experience with CI/CD tools (Jenkins, GitHub, Ansible CI/CD, etc.).
- Knowledge of containerization and orchestration (Docker, Kubernetes).
- Experience with cloud platforms (AWS EMR, Azure Databricks, GCP Dataproc).

2) DevOps Skills:
- Expertise in infrastructure automation tools (Terraform, Ansible, etc.).
- Strong understanding of monitoring and logging tools (Prometheus, Grafana, Splunk).
- Experience with version control systems (Git) and branching strategies.

3) Leadership and Communication:
- Proven experience leading development and DevOps teams.
- Strong problem-solving skills and ability to make technical decisions.
- Excellent communication and collaboration skills.

Preferred Qualifications:
- Experience with real-time data processing frameworks (e.g., Spark Streaming, Flink).
- Knowledge of data lake architectures and Delta Lake.
- Certifications in cloud platforms (AWS, Azure, GCP).
- Familiarity with Agile/Scrum methodologies.

Education and Experience:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 10+ years of experience in Spark application development and DevOps.
- 3+ years of experience in a technical leadership role.

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
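To illustrate the Spark-plus-Kafka integration this role centres on, here is a minimal PySpark Structured Streaming sketch. Broker addresses, topic name and sink paths are hypothetical.

```python
# Minimal Structured Streaming sketch: consume a Kafka topic and land the
# decoded records as Parquet. Brokers/topic/paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-events-stream").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical
          .option("subscribe", "payments")                    # hypothetical topic
          .load())

# Kafka values arrive as bytes; decode before transforming further.
decoded = events.select(F.col("value").cast("string").alias("payload"))

query = (decoded.writeStream
         .format("parquet")
         .option("path", "/data/streams/payments")            # hypothetical sink
         .option("checkpointLocation", "/data/checkpoints/payments")
         .start())
query.awaitTermination()
```

The checkpoint location is what gives the stream exactly-once file output across restarts, which is why it is mandatory for file sinks.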

Posted 1 week ago

Apply

14.0 years

1 - 8 Lacs

Hyderabad

On-site

Job Description: Lead Data Engineer

About AT&T Chief Data Office
The Chief Data Office (CDO) at AT&T is responsible for leveraging data as a strategic asset to drive business value. The team focuses on data governance, data engineering, artificial intelligence, and advanced analytics to enhance customer experience, optimize operations, and enable innovation.

Candidates will:
- Work on cutting-edge cloud technologies, AI/ML, and data-driven solutions, as part of a dynamic and innovative team driving digital transformation.
- Lead high-impact Agile initiatives with top talent in the industry.
- Get the opportunity to grow and implement Agile at an enterprise level.
- Be offered competitive compensation, a flexible work culture, and learning opportunities.

Shift timing: 12:30 to 9:30 pm IST (Bangalore) / 1:00 to 10:00 pm (Hyderabad)
Work mode: Hybrid (3 days mandatory in office)
Location / Additional Location: Bangalore, Hyderabad

Roles and Responsibilities:
- Create the product roadmap and project plan.
- Design, develop, and maintain scalable ETL pipelines using Azure services to process, transform, and load large datasets into cloud platforms.
- Collaborate with cross-functional teams, including data architects, analysts, and business stakeholders, to gather data requirements and deliver efficient data solutions.
- Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure.
- Work together with data scientists/architects and analysts to understand the needs for data and create effective data workflows.
- Exposure to Snowflake warehouse (see the sketch after this listing).
- Big Data engineering with a solid background in the larger Hadoop ecosystem and real-time analytics tools, including PySpark/Scala Spark/Hive/Hadoop CLI/MapReduce/Storm/Kafka/Lambda architecture.
- Implement data validation and cleansing techniques.
- Improve the scalability, efficiency, and cost-effectiveness of data pipelines.
- Experience in designing and hands-on development of cloud-based analytics solutions.
- Expert-level understanding of Azure Data Factory, Azure Data Lake, Snowflake, and PySpark is required.
- A full-stack development background with Java and JavaScript/CSS/HTML is good to have; knowledge of ReactJS/Angular is a plus.
- Design and build data pipelines using API ingestion and streaming ingestion methods.
- Unix/Linux expertise; comfortable with the Linux operating system and shell scripting.
- Knowledge of DevOps processes (including CI/CD) and infrastructure as code is desirable.
- PL/SQL and RDBMS background with Oracle/MySQL.
- Comfortable with microservices, CI/CD, Docker, and Kubernetes.
- Strong experience in common Data Vault data warehouse modelling principles.
- Create/modify Docker images and deploy them via Kubernetes.

Additional Skills Required
The ideal candidate should have at least 14 years of experience in IT, in addition to the following:
- 10+ years of extensive development experience using Snowflake or a similar data warehouse technology
- Working experience with dbt and other technologies of the modern data stack, such as Snowflake, Azure, Databricks and Python
- Experience in agile processes, such as Scrum
- Extensive experience in writing advanced SQL statements and performance tuning
- Experience in data ingestion techniques using custom or SaaS tools
- Experience in data modelling and the ability to optimize existing/new data models
- Experience in data mining, data warehouse solutions, and ETL, and using databases in a business environment with large-scale, complex datasets

Technical Qualifications (preferred):
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Experience in high-tech, software, or telecom industries is a plus.
- Strong analytical skills to translate insights into impactful product initiatives.

#DataEngineering
Weekly Hours: 40
Time Type: Regular
Location: Hyderabad, Telangana, India

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.

Job ID: R-63188
Date posted: 06/05/2025

Benefits: Your needs? Met. Your wants? Considered. Take a look at our comprehensive benefits: Paid Time Off, Tuition Assistance, Insurance Options, Discounts, Training & Development.
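For the Snowflake experience named above, here is a minimal sketch using the Snowflake Python connector. Account, credentials, warehouse and table names are hypothetical; real code would pull secrets from a vault rather than hard-coding them.

```python
# Minimal Snowflake connector sketch: run an aggregation query and print the
# results. All connection parameters and object names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example-account",   # hypothetical
    user="etl_user",
    password="***",              # placeholder; never hard-code secrets
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute(
        "SELECT order_date, SUM(amount) FROM orders GROUP BY order_date"
    )
    for order_date, total in cur.fetchall():
        print(order_date, total)
finally:
    conn.close()
```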

Posted 1 week ago

Apply

9.0 years

6 - 9 Lacs

Hyderabad

On-site

Lead, Software Engineering
Hyderabad, India | Information Technology | 313260

Job Description

About The Role: Grade Level (for internal use): 11

The Team: Our team is responsible for the design, architecture, and development of our client-facing applications using a variety of tools that are regularly updated as new technologies emerge. You will have the opportunity every day to work with people from a wide variety of backgrounds and will be able to develop a close team dynamic with coworkers from around the globe.

The Impact: The work you do will be used every single day; it's the essential code you'll write that provides the data and analytics required for crucial, daily decisions in the capital and commodities markets.

What's in it for you:
- Build a career with a global company.
- Work on code that fuels the global financial markets.
- Grow and improve your skills by working on enterprise-level products and new technologies.

Responsibilities:
- Solve problems, and analyze and isolate issues.
- Provide technical guidance and mentoring to the team and help them adopt change as new processes are introduced.
- Champion best practices and serve as a subject matter authority.
- Develop solutions to support key business needs.
- Engineer components and common services based on standard development models, languages and tools.
- Produce system design documents and lead technical walkthroughs.
- Produce high-quality code.
- Collaborate effectively with technical and non-technical partners.
- Continuously improve the architecture as a team member.

Basic Qualifications:
- 9-12 years of experience designing/building data-intensive solutions using distributed computing.
- Proven experience in implementing and maintaining enterprise search solutions in large-scale environments.
- Experience working with business stakeholders and users, providing research direction and solution design, and writing robust, maintainable architectures and APIs.
- Experience developing and deploying search solutions in a public cloud such as AWS.
- Proficient programming skills in high-level languages: Java, Scala, Python.
- Solid knowledge of at least one machine learning research framework.
- Familiarity with containerization, scripting, cloud platforms, and CI/CD.
- 5+ years' experience with Python, Java, Kubernetes, and data and workflow orchestration tools.
- 4+ years' experience with Elasticsearch, SQL, NoSQL, Apache Spark, Flink, Databricks and MLflow.
- Prior experience operationalizing data-driven pipelines for large-scale batch and stream-processing analytics solutions.
- Good to have: experience contributing to GitHub and open-source initiatives or research projects, and/or participation in Kaggle competitions.
- Ability to quickly, efficiently, and effectively define and prototype solutions with continual iteration within aggressive product deadlines.
- Strong communication and documentation skills for both technical and non-technical audiences.

Preferred Qualifications:
- Search technologies: querying and indexing content for Apache Solr, Elasticsearch, etc. (see the sketch after this listing).
- Proficiency in search query languages (e.g., Lucene query syntax) and experience with data indexing and retrieval.
- Experience with machine learning models and NLP techniques for search relevance and ranking.
- Familiarity with vector search techniques and embedding models (e.g., BERT, Word2Vec).
- Experience with relevance tuning using A/B testing frameworks.
- Big data technologies: Apache Spark, Spark SQL, Hadoop, Hive, Airflow.
- Data science search technologies: personalization and recommendation models, Learning to Rank (LTR).
- Preferred languages: Python, Java.
- Database technologies: MS SQL Server platform, with stored-procedure programming experience using Transact-SQL.
- Ability to lead, train and mentor.

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Inclusive Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering an inclusive workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and equal opportunity, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)
Job ID: 313260
Posted On: 2025-04-28
Location: Hyderabad, Telangana, India
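To make the enterprise-search qualifications above concrete, here is a minimal sketch using the Elasticsearch Python client: index one document, then run a full-text match query. The host, index name and document fields are hypothetical.

```python
# Minimal Elasticsearch sketch: index a document, refresh, and search it.
# Cluster endpoint, index and fields are hypothetical.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical cluster endpoint

# Index a document; the index is created on first write with dynamic mapping.
es.index(index="research-notes", id="1",
         document={"title": "Copper outlook", "body": "Supply tightens in H2."})
es.indices.refresh(index="research-notes")  # make the doc searchable now

# Full-text match query with Lucene-style relevance scoring.
hits = es.search(index="research-notes",
                 query={"match": {"body": "supply"}})
for hit in hits["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```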

Posted 1 week ago

Apply

6.0 years

7 - 10 Lacs

Hyderabad

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Implement end-to-end features and user stories, from analysis/design through building, validation, deployment and post-deployment support
- Implement the core and complex user stories and components of the platform
- Implement required POCs to make sure that suggested designs/technologies meet the requirements
- Identify and create reusable components
- Create Scala/Spark jobs for data transformation and aggregation; implement user stories in data processing pipelines in AWS/Azure
- Produce unit tests for Spark transformations and helper methods (see the sketch after this listing)
- Perform code reviews and provide meaningful feedback to improve code quality
- Possess or acquire solid troubleshooting skills and an interest in troubleshooting issues across disparate technologies and environments
- Identify and integrate well over all integration points, in the context of a project as well as other applications in the environment
- Propose a solution to any issue raised during code review and be able to justify the decision taken
- Help teams with complex and unusual bugs and troubleshooting scenarios
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or equivalent experience
- 6+ years of working experience with Scala/Python, Spark, Hadoop (MapR), Kafka, Kubernetes and CI/CD
- Experience in Java is an added advantage
- Understanding of all non-functional requirements and the ability to address them in design and code
- Understanding of technology integration scenarios and the ability to apply these learnings in complex troubleshooting scenarios
- Understanding of CI/CD pipelines
- Proficiency in programming languages like Scala, Python and Java, with experience using IDEs such as Eclipse or IntelliJ
- Understanding of agile methodology
- Proven proactive and self-motivated attitude; spots improvement opportunities within and outside the project and presents them
- Proven solid written and verbal communication skills, including explaining complex concepts effectively to technical and non-technical audiences

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
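For the "unit tests for Spark transformations" responsibility above, here is a minimal pytest sketch. The transformation under test is a hypothetical helper, not part of the posting.

```python
# Minimal pytest sketch for a Spark transformation. The `add_total` helper
# and its columns are hypothetical.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def add_total(df):
    """Hypothetical transformation: total = quantity * unit_price."""
    return df.withColumn("total", F.col("quantity") * F.col("unit_price"))

@pytest.fixture(scope="session")
def spark():
    # A tiny local session is enough for unit tests.
    return (SparkSession.builder
            .master("local[1]")
            .appName("tests")
            .getOrCreate())

def test_add_total(spark):
    df = spark.createDataFrame([(2, 5.0), (3, 1.5)], ["quantity", "unit_price"])
    rows = add_total(df).collect()
    assert [r["total"] for r in rows] == [10.0, 4.5]
```

Keeping transformations as pure DataFrame-in/DataFrame-out functions is what makes them testable this way.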

Posted 1 week ago

Apply

10.0 years

Salary not disclosed

Gurgaon, Haryana, India

On-site


Job Title: Lead Data Engineer

Job Summary
The Lead Data Engineer will provide technical expertise in the analysis, design, development, rollout and maintenance of data integration initiatives. This role will contribute to implementation methodologies and best practices, and work on project teams to analyse, design, develop and deploy business intelligence / data integration solutions that support a variety of customer needs. The position oversees a team of Data Integration Consultants at various levels, ensuring their success on projects, goals, trainings and initiatives through mentoring and coaching. It provides technical expertise in needs identification, data modelling, data movement and transformation mapping (source to target), automation and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business-unit or project perspective, whilst leveraging best-fit technologies (e.g., cloud, Hadoop, NoSQL, etc.) to address business and environmental challenges. The role works with stakeholders to identify and define self-service analytic solutions, dashboards, actionable enterprise business intelligence reports and business intelligence best practices, is responsible for repeatable, lean and maintainable enterprise BI design across organizations, and partners effectively with the client team. We expect leadership not only in the conventional sense but also within the team: candidates should show leadership qualities such as innovation, critical thinking, optimism/positivity, communication, time management, collaboration, problem-solving, acting independently, knowledge sharing and approachability.

Responsibilities
- Design, develop, test, and deploy data integration processes (batch or real-time) using tools such as Microsoft SSIS, Azure Data Factory, Databricks, Matillion, Airflow, Sqoop, etc.
- Create functional and technical documentation, e.g. ETL architecture documentation, unit testing plans and results, data integration specifications, data testing plans, etc.
- Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow and the conceptual, logical, and physical data models based on those needs.
- Perform data analysis to validate data models and to confirm the ability to meet business needs.
- Serve as project or DI lead where needed, overseeing multiple consultants from various competencies.
- Stay current with emerging and changing technologies to best recommend and implement beneficial technologies and approaches for data integration.
- Ensure proper execution/creation of methodology, training, templates, resource plans and engagement review processes.
- Coach team members to ensure understanding of projects and tasks, providing effective feedback (critical and positive) and promoting growth opportunities when appropriate.
- Coordinate and consult with the project manager, client business staff, client technical staff and project developers on data architecture best practices and anything else data-related at the project or business-unit level.
- Architect, design, develop and set direction for enterprise self-service analytic solutions, business intelligence reports, visualisations and best-practice standards. Toolsets include but are not limited to SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau and Qlik.
- Work with the report team to identify, design and implement a reporting user experience that is consistent and intuitive across environments and report methods, defines security, and meets usability and scalability best practices.

Required Qualifications
- 10 years of industry implementation experience with data integration tools such as AWS services (Redshift, Athena, Lambda, Glue, S3), ETL, etc.
- 5-8 years of management experience required.
- 5-8 years of consulting experience preferred.
- Minimum of 5 years of data architecture, data modelling or similar experience.
- Bachelor's degree or equivalent experience; Master's degree preferred.
- Strong data warehousing, OLTP systems, data integration and SDLC background.
- Strong experience in orchestration, with working experience of cloud-native / 3rd-party ETL data load orchestration (Data Factory, HDInsight, Data Pipeline, Cloud Composer or similar).
- Understanding of and experience with major data architecture philosophies (Dimensional, ODS, Data Vault, etc.).
- Understanding of modern data warehouse capabilities and technologies such as real-time, cloud and Big Data.
- Understanding of on-premises and cloud infrastructure architectures (e.g. Azure, AWS, GCP).
- Strong experience in Agile processes (Scrum cadences, roles, deliverables) and working experience in Azure DevOps, JIRA or similar, with experience in CI/CD using one or more code management platforms.
- Strong Databricks experience; required to create notebooks in PySpark (see the sketch after this listing).
- Experience using major data modelling tools (examples: ERwin, ER/Studio, PowerDesigner, etc.).
- Experience with major database platforms (e.g. SQL Server, Oracle, Azure Data Lake, Hadoop, Azure Synapse/SQL Data Warehouse, Snowflake, Redshift, etc.).
- 3-5 years' development experience in decision support / business intelligence environments utilising tools such as SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau, Looker, etc.

Preferred Skills & Experience
- Knowledge of and working experience with data integration processes, such as data warehousing, EAI, etc.
- Experience providing estimates for data integration projects, including testing, documentation, and implementation.
- Ability to analyse business requirements as they relate to data movement and transformation processes, and to research, evaluate and recommend alternative solutions.
- Ability to provide technical direction to other team members, including contractors and employees.
- Ability to contribute to conceptual data modelling sessions to accurately define business processes, independently of data structures, and then combine the two together.
- Proven experience leading team members, directly or indirectly, in completing high-quality major deliverables with superior results.
- Demonstrated ability to serve as a trusted advisor who builds influence with client management beyond simply EDM.
- Can create documentation and presentations such that they "stand on their own".
- Can advise sales on the evaluation of data integration efforts for new or existing client work.
- Can contribute to internal/external data integration proofs of concept.
- Demonstrates the ability to create new and innovative solutions to problems that have not previously been encountered.
- Ability to work independently on projects as well as collaborate effectively across teams.
- Must excel in a fast-paced, agile environment where critical thinking and strong problem-solving skills are required for success.
- Strong team-building, interpersonal, analytical, problem identification and resolution skills.
- Experience working with multi-level business communities.
- Can effectively utilise SQL and/or an available BI tool to validate/elaborate business rules.
- Demonstrates an understanding of EDM architectures and applies this knowledge in collaborating with the team to design effective solutions to business problems/issues.
- Effectively influences and, at times, oversees business and data analysis activities to ensure sufficient understanding and quality of data.
- Demonstrates a complete understanding of and utilises DSC methodology documents to efficiently complete assigned roles and associated tasks.
- Deals effectively with all team members and builds strong working relationships/rapport with them.
- Understands and leverages a multi-layer semantic model to ensure scalability, durability, and supportability of the analytic solution.
- Understands modern data warehouse concepts (real-time, cloud, Big Data) and how to enable such capabilities from a reporting and analytics standpoint.
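For the Databricks notebook requirement above, here is a minimal PySpark sketch in the notebook style the listing describes: read a raw table, clean it, and publish a curated table. Table and column names are hypothetical; on Databricks the Delta format and the `spark` session are provided by the runtime.

```python
# Minimal Databricks-style notebook cell: bronze -> silver cleanup.
# Table/column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks `spark` already exists; getOrCreate() also works locally.
spark = SparkSession.builder.getOrCreate()

raw = spark.read.table("bronze.transactions")  # hypothetical raw Delta table

cleaned = (raw
           .dropDuplicates(["txn_id"])
           .filter(F.col("amount") > 0)
           .withColumn("txn_date", F.to_date("txn_ts")))

(cleaned.write
        .format("delta")
        .mode("overwrite")
        .saveAsTable("silver.transactions"))   # curated layer
```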

Posted 1 week ago

Apply

7.0 years

4 - 6 Lacs

Hyderabad

On-site

ML Data Engineer Hyderabad, India; Bangalore, India; Noida, India Information Technology 314321 Job Description About The Role: Grade Level (for internal use): 10 Responsibilities: To work closely with various stakeholders to collect, clean, model and visualise datasets. To create data driven insights by researching, designing and implementing ML models to deliver insights and implement action-oriented solutions to complex business problems To drive ground-breaking ML technology within the Modelling and Data Science team. To extract hidden value insights and enrich accuracy of the datasets. To leverage technology and automate workflows creating modernized operational processes aligning with the team strategy. To understand, implement, manage, and maintain analytical solutions & techniques independently. To collaborate and coordinate with Data, content and modelling teams and provide analytical assistance of various commodity datasets To drive and maintain high quality processes and delivering projects in collaborative Agile team environments. Requirements: 7+ years of programming experience particularly in Python 4+ years of experience working with SQL or NoSQL databases. 1+ years of experience working with Pyspark. University degree in Computer Science, Engineering, Mathematics, or related disciplines. Strong understanding of big data technologies such as Hadoop, Spark, or Kafka. Demonstrated ability to design and implement end-to-end scalable and performant data pipelines. Experience with workflow management platforms like Airflow. Strong analytical and problem-solving skills. Ability to collaborate and communicate effectively with both technical and non-technical stakeholders. • Experience building solutions and working in the Agile working environment Experience working with git or other source control tools Strong understanding of Object-Oriented Programming (OOP) principles and design patterns. Knowledge of clean code practices and the ability to write well-documented, modular, and reusable code. Strong focus on performance optimization and writing efficient, scalable code. Nice to have: Experience working with Oil, gas and energy markets Experience working with BI Visualization applications (e.g. Tableau, Power BI) Understanding of cloud-based services, preferably AWS Experience working with Unified analytics platforms like Databricks Experience with deep learning and related toolkits: Tensorflow, PyTorch, Keras, etc. About S&P Global Commodity Insights At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We’re a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating Energy Transition, S&P Global Commodity Insights’ coverage includes oil and gas, power, chemicals, metals, agriculture and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world’s foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world’s leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights. What’s In It For You? Our Purpose: Progress is not a self-starter. 
It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 314321
Posted On: 2025-05-19
Location: Hyderabad, Telangana, India
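As a point of reference for the PySpark and pipeline requirements above, here is a minimal sketch of the kind of batch aggregation such a role involves. The storage paths, column names, and schema are illustrative assumptions, not S&P Global's actual data or code.

# Minimal sketch: cleaning and aggregating a hypothetical commodity price feed with PySpark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("commodity-prices").getOrCreate()

# Placeholder input location; real sources, fields, and formats will differ.
prices = spark.read.parquet("s3://example-bucket/commodity_prices/")

daily_avg = (
    prices
    .filter(F.col("price").isNotNull())                # drop incomplete ticks
    .withColumn("trade_date", F.to_date("trade_ts"))   # normalize timestamp to date
    .groupBy("commodity", "trade_date")
    .agg(F.avg("price").alias("avg_price"),
         F.count("*").alias("n_trades"))
)

# Partitioning by date keeps downstream reads cheap for time-bounded queries.
daily_avg.write.mode("overwrite").partitionBy("trade_date").parquet(
    "s3://example-bucket/curated/daily_commodity_prices/"
)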

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site


Primary skills: Technology -> Big Data - Data Processing -> Spark

Spark Expertise:
Expert proficiency in Spark.
Ability to design and implement efficient data processing workflows.
Experience with Spark SQL and DataFrames.
Good exposure to Big Data architectures and a good understanding of the Big Data ecosystem.
Framework-building experience on Hadoop.
Good database knowledge with SQL tuning experience.
Good to have: experience with Python, APIs and exposure to Kafka.
Ability to work with clients to identify business challenges and contribute to client deliverables by refining, analyzing, and structuring relevant data.
Awareness of the latest technologies and trends.
Logical thinking and problem-solving skills, along with an ability to collaborate.
Ability to assess current processes, identify improvement areas and suggest technology solutions.
Knowledge of one or two industry domains.
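To illustrate the Spark SQL and DataFrame skills this listing asks for, here is a minimal, self-contained sketch; the orders dataset and its columns are hypothetical.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-demo").getOrCreate()

# Hypothetical CSV of orders; the header row gives column names, values read as strings.
orders = spark.read.option("header", True).csv("/data/orders.csv")
orders.createOrReplaceTempView("orders")

# The same aggregation could be written with the DataFrame API; Spark SQL is
# shown here because the listing calls out both.
top_customers = spark.sql("""
    SELECT customer_id, SUM(CAST(amount AS DOUBLE)) AS total_spend
    FROM orders
    GROUP BY customer_id
    ORDER BY total_spend DESC
    LIMIT 10
""")
top_customers.show()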

Posted 1 week ago

Apply

0 years

3 - 7 Lacs

Hyderābād

On-site

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Bengaluru, Karnataka, India; Hyderabad, Telangana, India.

Minimum qualifications:
Bachelor's degree in Computer Science or equivalent practical experience.
Experience in automating infrastructure provisioning, Developer Operations (DevOps), integration, or delivery.
Experience in networking, compute infrastructure (e.g., servers, databases, firewalls, load balancers) and architecting, developing, or maintaining cloud solutions in virtualized environments.
Experience in scripting with Terraform and Networking, DevOps, Security, Compute, Storage, Hadoop, Kubernetes, or Site Reliability Engineering.

Preferred qualifications:
Certification in Cloud with experience in Kubernetes, Google Kubernetes Engine, or similar.
Experience with customer-facing migration including service discovery, assessment, planning, execution, and operations.
Experience with IT security practices like identity and access management, data protection, encryption, certificate and key management.
Experience with Google Cloud Platform (GCP) techniques like prompt engineering, dual encoders, and embedding vectors.
Experience in building prototypes or applications.
Experience in one or more of the following disciplines: software development, managing operating system environments (Linux or related), network design and deployment, databases, storage systems.

About the job
The Google Cloud Consulting Professional Services team guides customers through the moments that matter most in their cloud journey to help businesses thrive. We help customers transform and evolve their business through the use of Google’s global network, web-scale data centers, and software infrastructure. As part of an innovative team in this rapidly growing business, you will help shape the future of businesses of all sizes and use technology to connect with customers, employees, and partners. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
Provide domain expertise in cloud platforms and infrastructure; solve cloud platform challenges.
Work with customers to design and implement Cloud based technical architectures, migration approaches, and application optimizations that enable business objectives.
Be a technical advisor and perform troubleshooting to resolve technical challenges for customers.
Create and deliver best practice recommendations, tutorials, blog articles, and sample code.
Travel up to 30% in-region for meetings, technical reviews, and onsite delivery activities.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law.
If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
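For context on the Kubernetes and GKE skills listed above, a basic cluster sweep with the official Python client might look like the sketch below. It assumes a valid kubeconfig (for GKE, typically obtained via gcloud container clusters get-credentials) and is not Google's own tooling.

# Requires: pip install kubernetes
from kubernetes import client, config

config.load_kube_config()   # reads ~/.kube/config
v1 = client.CoreV1Api()

# Print namespace, name, and phase for every pod in the cluster.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)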

Posted 1 week ago

Apply

3.0 years

3 - 8 Lacs

Gurgaon

On-site

- 3+ years of building machine learning models for business application experience
- Experience programming in Java, C++, Python or related language
- Experience with neural deep learning methods and machine learning

Amazon Shipping Team

Basic Qualifications:
BTech/MTech in Computer Science, Machine Learning, Operations Research, Statistics, or related technical field
Experience applying ML techniques to solve complex business problems
Programming skills in Python, R, or similar languages
Experience with modern ML frameworks (PyTorch, TensorFlow, etc.)

About the Role:
At Amazon Shipping, we're revolutionizing package delivery through machine learning. Our network handles packages daily with predictive monitoring, proactive failure detection, and intelligent redundancy, all while optimizing costs for our customers.

Key Responsibilities:
Design and develop ML models for: transportation cost auditing and discrepancy detection; package-level shipping cost prediction; First Mile optimization through warehouse pickup forecasting; delivery delay prediction using network signals and external factors
Collaborate with cross-functional teams to implement ML solutions at scale
Author scientific papers for ML conferences
Mentor team members in ML best practices
Provide ML consultation across organizations

Preferred Qualifications:
PhD
Experience with large-scale distributed systems
Publication record in top-tier ML conferences
Expertise in time series forecasting and anomaly detection
Background in transportation/logistics optimization
Communication skills with technical and non-technical stakeholders

Our Team:
You'll join a diverse team of Applied Scientists, Software Engineers, and Business Intelligence Engineers working on cutting-edge ML solutions. We're passionate about solving complex problems and delivering customer value through innovation.

Key job responsibilities
Your role will require you to demonstrate Think Big and Invent and Simplify, by refining and translating Transportation domain-related business problems into one or more Machine Learning problems. You will use techniques from a wide array of machine learning paradigms, such as supervised, unsupervised, semi-supervised and reinforcement learning. Your model choices will include, but not be limited to, linear/logistic models, tree based models, deep learning models, ensemble models, and Q-learning models. You will use techniques such as LIME and SHAP to make your models interpretable for your customers. You will employ a family of reusable modelling solutions to ensure that your ML solution scales across multiple regions (such as North America, Europe, Asia) and package movement types (such as small parcel movements and truck movements). You will partner with Applied Scientists and Research Scientists from other teams in the US and India working on related business domains. Your models are expected to be of production quality, and will be directly used in production services.

Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, numpy, scipy etc.
Experience with large scale distributed systems such as Hadoop, Spark etc.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information.
If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
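As a rough illustration of the delay-prediction modelling this posting describes, here is a toy scikit-learn example on synthetic data. The features are invented stand-ins (e.g., distance, volume, weather, dwell time), and interpretability tools such as SHAP, mentioned above, would sit on top of a model like this.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))   # hypothetical features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)  # 1 = delayed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))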

Posted 1 week ago

Apply

50.0 years

0 Lacs

Gurgaon

On-site

About the Opportunity
Job Type: Permanent
Application Deadline: 20 June 2025

Job Description
Title: Data Scientist, Risk Data Analytics
Department: Data Value
Location: Gurgaon
Reports To: Associate Director, Risk Data Analytics
Level: 5

We’re proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our Data Value team and feel like you’re part of something bigger.

About Fidelity International
Fidelity International offers investment solutions and services and retirement expertise to more than 2.5 million customers worldwide. As a privately held, purpose-driven company with a 50-year heritage, we think generationally and invest for the long term. Operating in more than 25 locations and with $893.2 billion in total assets, our clients range from central banks, sovereign wealth funds, large corporates, financial institutions, insurers and wealth managers, to private individuals. Our Global Platform Solutions business provides individuals, advisers and employers with access to world-class investment choices, third-party solutions, administration services and pension guidance. Together with our Investment Solutions & Services business, we invest $437 billion on behalf of our clients. By combining our asset management expertise with our solutions for workplace and personal investing, we work together to build better financial futures for millions of people around the world. Our clients come from all walks of life and so do we. We are proud of our inclusive culture and encourage applications from the widest mix of talent, whatever your age, gender, ethnicity, sexual orientation, gender identity, social background and more. We are a disability-friendly company and would welcome a conversation with you if you feel you might benefit from any reasonable adjustments to perform to the best of your ability during the recruitment process and beyond. We are committed to being a truly flexible employer, encouraging and trusting our people to perform their role in the way that works best for them, our business, our colleagues and our clients. We offer the maximum possible flexibility over where and when you work for all, considering your role and any local regulations. We call this new approach “dynamic working”. Find out more about what we do, our history, our new approach of “dynamic working“ and how you could be a part of our future at About us | Careers.

About Global Risk
The Global Risk team in Fidelity covers the management oversight of Fidelity’s risk profile, including key risk frameworks, policies and procedures and oversight and challenge processes. The team partner with the businesses to ensure Fidelity manages its risk profile within defined risk appetite. The team comprises risk specialists covering all facets of risk management, including investment, financial, non-financial and strategic risk. As part of a broader General Counsel team, the Risk team collaborates closely with Compliance, Legal, Tax and Corporate Sustainability colleagues.

About Risk Data Analytics Hub
The vision of Risk Data Analytics Hub is to establish a data-centric risk function that is forward-thinking, resilient, and proactive. The hub's mission is to enhance risk management processes and unlock innovative opportunities in the ever-changing risk and business landscape.
The Hub has made significant strides in Investment Risk, delivering prominent contributions such as the Fund Performance Monitoring, Fund Aggregate Exposures, Fund Market Risk, Fund Liquidity Risk, and other comprehensive monitoring and reporting dashboards. These tools have been crucial in supporting risk oversight and regulatory submissions. The Hub's goal is to scale this capability across global risk, using data-driven insights to uncover hidden patterns and predict emerging risks. This will enable decision-makers to prioritise actions that align with business objectives. The approach is to dismantle silos and foster collaboration across the global risk team, introducing new tools, techniques, and innovation themes to enhance agility.

About your role
You will be expected to take a leading role in developing the Data Science and Advanced Analytics solutions for our business. This will involve:
Engaging with the key stakeholders to understand various subject areas in the Global Risk Team, including Investment Risk, Non-Financial Risk, Enterprise Risk, Model Risk, Enterprise Resilience etc.
Implementing advanced analytics solutions on On-Premises/Cloud platforms, developing proof-of-concepts and engaging with the internal and external ecosystem to progress the proofs of concept to production.
Engaging and collaborating with other internal teams like Data Lake, Data Engineering, DevOps/MLOps and Technology for the development of new tools, capabilities, and solutions.
Maximizing adoption of Cloud-based advanced analytics solutions: building out sandbox analytics environments using Snowflake, AWS, Adobe, Salesforce, and supporting delivered models and infrastructure on AWS, including data changes and model tuning.

About you
Key Responsibilities

Developing and Delivering Data Science solutions for business (40%)
Partner with internal (FIL teams) and external ecosystem to design and deliver advanced analytics enabled Data Science solutions.
Create advanced analytics solutions on quantitative and text data using Artificial Intelligence, Machine Learning and NLP techniques.
Create compelling visualisations that enable the smooth consumption of predictions and insights for customer benefit.

Stakeholder Management (30%)
Work with Risk SMEs/Managers, stakeholders and sponsors to understand the business problem and translate it into an appropriate analytics solution.
Engage with key stakeholders for smooth execution, delivery, implementation and maintenance of solutions.

Adoption of Cloud enabled Data Science solutions (20%)
Maximize adoption of Cloud-based advanced analytics solutions.
Build out sandbox analytics environments using Snowflake, AWS, Adobe, Salesforce.
Deploy solutions in production while adhering to best practices involving Model Explainability, MLOps, Feature Stores, Model Management, Responsible AI etc.

Collaboration and Ownership (10%)
Share knowledge and best practices with the team, including coaching or training in deep learning / machine learning methodologies.
Provide mentoring, coaching, and consulting advice and guidance to staff, e.g. analytic methodologies, data recommendations.
Take complete independent ownership of projects and initiatives in the team with minimal support.
Experience and Qualifications Required
Qualifications: Engineer from IIT / Master’s in a field related to Data Science/Economics/Mathematics (Tier 1 institutions like ISI, Delhi School of Economics) / MBA from a Tier 1 institution

Must have Skills & Experience Required:
Overall, 9+ years of experience in Data Science and Analytics
5+ years of hands-on experience in Statistical Modelling / Machine Learning Techniques / Natural Language Processing / Deep Learning
5+ years of experience in Python / Machine Learning / Deep Learning
Excellent problem-solving skills
Should be able to run analytics applications such as Python and SAS and interpret statistical results
Implementation of models with clear, measurable outcomes

Good to have Skills & Experience:
Ability to engage in discussion with senior stakeholders on defining business problems, designing analysis projects, and articulating analytical insights to stakeholders.
Experience on Spark/Hadoop/Big Data platforms is a plus.
Experience with unstructured data and big data.
Experience with secondary data and knowledge of primary market research is a plus.
Ability to independently own and manage projects with minimal support.
Excellent analytical skills and a strong sense for structure and logic.
Ability to develop, test and validate hypotheses.

Feel rewarded
For starters, we’ll offer you a comprehensive benefits package. We’ll value your wellbeing and support your development. And we’ll be as flexible as we can about where and when you work, finding a balance that works for all of us. It’s all part of our commitment to making you feel motivated by the work you do and happy to be part of our team. For more about our work, our approach to dynamic working and how you could build your future here, visit careers.fidelityinternational.com.
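To make the NLP requirement above concrete, a minimal text-classification sketch in scikit-learn could look like the following; the example texts and labels are invented and bear no relation to Fidelity's actual risk data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus: 1 = escalate for review, 0 = routine.
texts = ["limit breach on fund A", "routine rebalance",
         "unusual outflow spike", "normal settlement"]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["sudden redemption spike in fund B"]))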

Posted 1 week ago

Apply

40.0 years

0 Lacs

Greater Kolkata Area

Remote


Who We Are
Escalent is an award-winning data analytics and advisory firm that helps clients understand human and market behaviors to navigate disruption. As catalysts of progress for more than 40 years, our strategies guide the world’s leading brands. We accelerate growth by creating a seamless flow between primary, secondary, syndicated, and internal business data, providing consulting and advisory services from insights through implementation. Based on a profound understanding of what drives human beings and markets, we identify actions that build brands, enhance customer experiences, inspire product innovation and boost business productivity. We listen, learn, question, discover, innovate, and deliver—for each other and our clients—to make the world work better for people.

Why Escalent? Once you join our team you will have the opportunity to...
Access experts across industries for maximum learning opportunities including Weekly Knowledge Sharing Sessions, LinkedIn Learning, and more.
Gain exposure to a rich variety of research techniques from knowledgeable professionals.
Enjoy a remote first/hybrid work environment with a flexible schedule.
Obtain insights into the needs and challenges of your clients—to learn how the world’s leading brands use research.
Experience peace of mind working for a company with a commitment to conducting research ethically.
Build lasting relationships with fun colleagues in a culture that values each person.

Role Overview:
We are looking for a Data Engineer to design, build, and optimize scalable data pipelines and infrastructure that power analytics, machine learning, and business intelligence. You will work closely with data scientists, analysts, and software engineers to ensure efficient data ingestion, transformation, and management.
Roles & Responsibilities:
Design, develop, and maintain scalable ETL/ELT pipelines to extract, transform, and load data from diverse sources
Build and optimize data storage solutions using SQL and NoSQL databases, data lakes, and cloud warehouses (Snowflake, BigQuery, Redshift)
Ensure data quality, integrity, and security through automated validation, governance, and monitoring frameworks
Collaborate with data scientists and analysts to provide clean, structured, and accessible data for reporting and AI/ML models
Implement best practices for performance tuning, indexing, and query optimization to handle large-scale datasets
Write clean, structured code as defined in the team’s coding standards and create documentation for best practices
Stay updated with emerging data engineering technologies, architectures, frameworks, and industry best practices

Required Skills:
Strong proficiency in Python, SQL, and data processing frameworks (Pandas, Spark, Hadoop)
Experience with cloud-based data platforms (AWS, Azure, GCP) and services like S3, Glue, Athena, Data Factory, or BigQuery
Solid understanding of database design, data modeling and warehouse architectures
Hands-on experience with ETL/ELT pipelines and workflow orchestration tools (Apache Airflow, Prefect, Luigi)
Knowledge of APIs, RESTful services and integrating multiple data sources
Strong problem-solving and debugging skills in handling large-scale data processing challenges
Experience with version control systems (Git, GitHub, GitLab)
Ability to work in a team setting
Organizational and time management skills

Desirable skills:
Experience working with Agile development methodologies
Experience in building self-service data platforms for business users and analysts
Effective skills in written and verbal communication
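For readers unfamiliar with the orchestration tools named above, a minimal Apache Airflow DAG sketch follows. The DAG id, schedule, and task bodies are placeholders; the schedule= argument assumes Airflow 2.4+, while older versions use schedule_interval=.

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from a source system")   # placeholder

def transform():
    print("clean and validate before loading")    # placeholder

with DAG(
    dag_id="example_elt",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2   # extract runs before transform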

Posted 1 week ago

Apply

4.0 years

6 - 10 Lacs

Bengaluru

On-site

About us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview about TII
At Target, we have a timeless purpose and a proven strategy. And that hasn’t happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target’s global team and has more than 4,000 team members supporting the company’s global strategy and operations.

Position Overview:
Working at Target means helping all families discover the joy of everyday life. We bring that vision to life through our values and culture. Learn more about Target here. As a lead engineer, you serve as the technical anchor for the engineering team that supports a product. You create, own and are responsible for the application architecture that best serves the product in its functional and non-functional needs. You identify and drive architectural changes to accelerate feature development or improve the quality of service (or both). You have deep and broad engineering skills and are capable of standing up an architecture in its whole on your own, but you choose to influence a wider team by acting as a “force multiplier”.

Team Overview:
IT Data Platform (ITDP) is the powerhouse data platform driving Target's tech efficiencies, seamlessly integrating operational and analytical needs. It fuels every facet of Target Tech, from boosting developer productivity and enhancing system intelligence to ensuring top-notch security and compliance. Target Tech builds the technology that makes Target the easiest, safest and most joyful place to shop and work. From digital to supply chain to cybersecurity, develop innovations that power the future of retail while relying on best-in-class data science algorithms that drive value. Target Tech is at the forefront of the industry, revolutionizing technology efficiency with cutting-edge data and AI. ITDP meticulously tracks tech data points across stores, multi-cloud environments, data centers, and distribution centers. IT Data Platform leverages advanced AI algorithms to analyze vast datasets, providing actionable insights that drive strategic decision-making. By integrating Generative AI, it enhances predictive analytics, enabling proactive solutions and optimizing operational efficiencies.

Basic Qualifications:
4 years degree or equivalent experience
8+ years of industry experience in software design, development, and algorithm related solutions
8+ years of experience in programming languages such as Java, Python, Scala
Hands on experience developing distributed systems, large scale systems, database and/or backend APIs
Demonstrates expertise in analysis and optimization of systems capacity, performance, and operational health
Stays current with new and evolving technologies via formal training and self-directed education

Preferred Qualifications:
Experience with Big Data tools and the Hadoop ecosystem, such as Apache Spark, Apache Iceberg, Kafka, ORC, MapReduce, Yarn, Hive, HDFS, etc.
Experience in architecting, building and running a large-scale system
Experience with industry, open-source projects and/or databases and/or large-data distributed systems

Key Responsibilities:
Data Platform Management: Lead the design, implementation, and optimization of the Data Platform, ensuring scalability and data correctness.
Development: Oversee the development and maintenance of all core components of the platform.
Unified APIs: Manage and create highly scalable APIs with GraphQL at enterprise scale.
Platform Monitoring and Observability: Ensure monitoring solutions and security tools to protect the integrity of, and trust in, Data and APIs.
Leadership and Mentorship: Provide technical leadership and mentorship to engineering teams, fostering a culture of collaboration and continuous improvement.
Technology Design and Architecture: Articulate technology designs and architectural decisions to team members, ensuring alignment with business goals and technical standards.

Useful Links-
Life at Target- https://india.target.com/
Benefits- https://india.target.com/life-at-target/workplace/benefits
Culture: https://india.target.com/life-at-target/belonging
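As a sketch of the streaming side of the stack described above, reading a Kafka topic with Spark Structured Streaming might look like this. The broker address and topic name are hypothetical, and the spark-sql-kafka connector package must be on the classpath.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "it-telemetry")                # placeholder topic
    .load()
)

# Kafka delivers key/value as binary; cast to strings before parsing further.
parsed = events.select(F.col("key").cast("string"),
                       F.col("value").cast("string"))

query = parsed.writeStream.format("console").outputMode("append").start()
query.awaitTermination()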

Posted 1 week ago

Apply

3.0 - 4.0 years

0 Lacs

Bengaluru

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job purpose:
Work as a Technology or Functional Consultant on FinCrime solutions modernisation and transformation projects
Exhibit an understanding of financial services during client discussions and articulate client requirements into tech specs
Contribute as a team player in a team of consultants to deliver large technology programs

Work Experience Requirements
Understand high-level business requirements and relate them to appropriate AML / FinCrime product capabilities
Define and validate customisation needs for AML products as per client requirements
Review client processes and workflows and make recommendations to the client to maximise benefits from the AML product
Show in-depth knowledge of best banking practices and AML product modules
Prior experience in one or more COTS products such as NetReveal, Norkom, Actimize, SAS AML VI/VIA, Fircosoft or Quantexa

Your client responsibilities:
Work as a Technical Business Systems Analyst on one or more FinCrime projects
Interface and communicate with the onsite coordinators
Complete assigned tasks on time, with regular status reporting to the lead, the Manager and onsite coordinators
Interface with customer representatives as and when needed
Willingness to travel to customer locations on a need basis

Mandatory skills:
Technical:
Expert in the following NetReveal modules: Scenario Manager Configuration, Application Builder, Base Platform, Workflow Configurator, Services Manager, Batch Bridge, Scheduling Configuration, Command and Control, AML module; expert in Velocity templates
NetReveal Optimization module, multi-entity and multi-currency platform, Cloud platform, REST API development using Java
CI/CD technologies (BitBucket, Jenkins, Nexus, Serena)
Container technologies such as Docker, Kubernetes
NetReveal v7.4 or above; proficient in Oracle SQL, PL/SQL, WebSphere Application Server
Experience in Agile methodology
SQL and an understanding of big data tech such as Spark, Hadoop, or Elasticsearch
Scripting/Programming: at least one programming/scripting language among Python, Java or Unix shell script
Experience in product migration and implementation, preferably as part of at least one AML implementation
Act as the Subject Matter Expert (SME) and possess excellent functional/operational knowledge of the activities performed by the various teams
Should possess a high-level understanding of infrastructure designs, data models and application/business architecture

Functional:
Thorough knowledge of AML/CTF transaction monitoring, KYC and sanctions processes
Thorough knowledge of transaction monitoring and scenarios
Should have developed or worked on one or more of: KYC (know your customer), CDD (customer due diligence), EDD (enhanced due diligence), sanctions screening, PEP (politically exposed person) screening, adverse media screening, TM (transaction monitoring), CM (case management)
Thorough knowledge of case management workflows

Preferred Work Location: This position offers flexibility to work from any EY GDS office in India

Education and experience – Mandatory: MBA/MCA/BE/BTech or equivalent with banking industry experience of 3 to 4 years

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
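Purely to illustrate the flavour of transaction-monitoring logic referenced above, here is a toy pandas rule; real NetReveal scenarios are far richer, and the threshold, window, and data here are invented.

import pandas as pd

txns = pd.DataFrame({
    "account": ["A", "A", "A", "B"],
    "ts": pd.to_datetime(["2025-01-01", "2025-01-02", "2025-01-03", "2025-01-01"]),
    "amount": [9500, 9800, 9900, 1200],
})

# Crude structuring heuristic: 3+ just-under-10k transactions within 7 days.
under = txns[(txns["amount"] > 9000) & (txns["amount"] < 10000)]
counts = under.set_index("ts").groupby("account")["amount"].rolling("7D").count()
alerts = counts[counts >= 3].index.get_level_values("account").unique()
print(list(alerts))   # ['A'] for this toy data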

Posted 1 week ago

Apply

9.0 years

0 Lacs

Bengaluru

On-site

JOB DESCRIPTION
About KPMG in India
KPMG entities in India are professional services firm(s). These Indian member firms are affiliated with KPMG International Limited. KPMG was established in India in August 1993. Our professionals leverage the global network of firms, and are conversant with local laws, regulations, markets and competition. KPMG has offices across India in Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Hyderabad, Jaipur, Kochi, Kolkata, Mumbai, Noida, Pune, Vadodara and Vijayawada. KPMG entities in India offer services to national and international clients in India across sectors. We strive to provide rapid, performance-based, industry-focused and technology-enabled services, which reflect a shared knowledge of global and local industries and our experience of the Indian business environment.

Equal employment opportunity information
KPMG India has a policy of providing equal opportunity for all applicants and employees regardless of their color, caste, religion, age, sex/gender, national origin, citizenship, sexual orientation, gender identity or expression, disability or other legally protected status. KPMG India values diversity and we request you to submit the details below to support us in our endeavor for diversity. Providing the below information is voluntary and refusal to submit such information will not be prejudicial to you.

Major Duties & Responsibilities
• Work with business stakeholders and cross-functional SMEs to deeply understand business context and key business questions
• Create Proof of Concepts (POCs) / Minimum Viable Products (MVPs), then guide them through to production deployment and operationalization of projects
• Influence machine learning strategy for Digital programs and projects
• Make solution recommendations that appropriately balance speed to market and analytical soundness
• Explore design options to assess efficiency and impact, develop approaches to improve robustness and rigor
• Develop analytical/modelling solutions using a variety of commercial and open-source tools (e.g., Python, R, TensorFlow)
• Formulate model-based solutions by combining machine learning algorithms with other techniques such as simulations
• Design, adapt, and visualize solutions based on evolving requirements and communicate them through presentations, scenarios, and stories
• Create algorithms to extract information from large, multiparametric data sets
• Deploy algorithms to production to identify actionable insights from large databases
• Compare results from various methodologies and recommend optimal techniques
• Develop and embed automated processes for predictive model validation, deployment, and implementation
• Work on multiple pillars of AI including cognitive engineering, conversational bots, and data science
• Ensure that solutions exhibit high levels of performance, security, scalability, maintainability, repeatability, appropriate reusability, and reliability upon deployment
• Lead discussions at peer review and use interpersonal skills to positively influence decision making
• Provide thought leadership and subject matter expertise in machine learning techniques, tools, and concepts; make impactful contributions to internal discussions on emerging practices
• Facilitate cross-geography sharing of new ideas, learnings, and best practices

Required Qualifications
• Bachelor of Science or Bachelor of Engineering at a minimum
• 9+ years of work experience as a Data Scientist
• A combination of business focus, strong analytical and problem-solving skills, and programming knowledge to be able to quickly cycle hypotheses through the discovery phase of a project
• Advanced skills with statistical/programming software (e.g., R, Python) and data querying languages (e.g., SQL, Hadoop/Hive, Scala)
• Good hands-on skills in both feature engineering and hyperparameter optimization
• Experience producing high-quality code, tests, documentation
• Experience with Microsoft Azure or AWS data management tools such as Azure Data Factory, Data Lake, Azure ML, Synapse, Databricks
• Understanding of descriptive and exploratory statistics, predictive modelling, evaluation metrics, decision trees, machine learning algorithms, optimization & forecasting techniques, and/or deep learning methodologies
• Proficiency in statistical concepts and ML algorithms
• Good knowledge of Agile principles and process
• Ability to lead, manage, build, and deliver customer business results through data scientists or professional services team
• Ability to share ideas in a compelling manner, to clearly summarize and communicate data analysis assumptions and results
• Self-motivated and a proactive problem solver who can work independently and in teams

QUALIFICATIONS
B.Tech/M.Tech/MCA/M.Sc
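As one concrete instance of the automated model-validation processes mentioned above, a cross-validation harness in scikit-learn might look like this minimal sketch on synthetic data.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
model = GradientBoostingClassifier(random_state=42)

# Five-fold cross-validated AUC as a simple, repeatable validation gate.
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"mean AUC {scores.mean():.3f} +/- {scores.std():.3f}")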

Posted 1 week ago

Apply

5.0 years

3 Lacs

Bengaluru

On-site

This role is for one of our clients
Industry: Technology, Information and Media
Seniority level: Mid-Senior level
Min Experience: 5 years
Location: Bengaluru, Karnataka, India
Job Type: full-time

We are seeking a Big Data Engineer with deep technical expertise to join our fast-paced, data-driven team. In this role, you will be responsible for designing and building robust, scalable, and high-performance data pipelines that fuel real-time analytics, business intelligence, and machine learning applications across the organization. If you thrive on working with large datasets, cutting-edge technologies, and solving complex data engineering challenges, this is the opportunity for you.

What You’ll Do
Design & Build Pipelines: Develop efficient, reliable, and scalable data pipelines that process large volumes of structured and unstructured data using big data tools.
Distributed Data Processing: Leverage the Hadoop ecosystem (HDFS, Hive, MapReduce) to manage and transform massive datasets.
Starburst (Trino) Integration: Design and optimize federated queries using Starburst, enabling seamless access across diverse data platforms.
Databricks Lakehouse Development: Utilize Spark, Delta Lake, and MLflow on the Databricks Lakehouse Platform to enable unified analytics and AI workloads.
Data Modeling & Architecture: Work with stakeholders to translate business requirements into flexible, scalable data models and architecture.
Performance & Optimization: Monitor, troubleshoot, and fine-tune pipeline performance to ensure efficiency, reliability, and data integrity.
Security & Compliance: Implement and enforce best practices for data privacy, security, and compliance with global regulations like GDPR and CCPA.
Collaboration: Partner with data scientists, product teams, and business users to deliver impactful data solutions and improve decision-making.

What You Bring
Must-Have Skills
5+ years of hands-on experience in big data engineering, data platform development, or similar roles.
Strong experience with Hadoop, including HDFS, Hive, HBase, and MapReduce.
Deep understanding and practical use of Starburst (Trino) or Presto for large-scale querying.
Hands-on experience with the Databricks Lakehouse Platform, Spark, and Delta Lake.
Proficient in SQL and programming languages like Python or Scala.
Strong knowledge of data warehousing, ETL/ELT workflows, and schema design.
Familiarity with CI/CD tools, version control (Git), and workflow orchestration tools (Airflow or similar).

Nice-to-Have Skills
Experience with cloud environments such as AWS, Azure, or GCP.
Exposure to Docker, Kubernetes, or infrastructure-as-code tools.
Understanding of data governance and metadata management platforms.
Experience supporting AI/ML initiatives with curated datasets and pipelines.
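To ground the Starburst (Trino) requirement, a federated query from Python via the trino client package might look like the following sketch; the host, catalog, schema, and table are placeholder assumptions.

# Requires: pip install trino
import trino

conn = trino.dbapi.connect(
    host="trino.example.internal",   # placeholder coordinator host
    port=8080,
    user="analyst",
    catalog="hive",                  # Trino can federate across many catalogs
    schema="default",
)
cur = conn.cursor()
cur.execute("SELECT event_type, count(*) FROM events GROUP BY event_type")
for row in cur.fetchall():
    print(row)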

Posted 1 week ago

Apply

8.0 years

6 - 9 Lacs

Bengaluru

Remote

JOB DESCRIPTION
Join the Consumer & Community Banking division at Chase, a leading U.S. financial services firm, as a skilled data professional in our Data & Analytics team.

Job Summary:
As an Analytical Solutions Manager within the Consumer and Community Banking (CCB) Finance Data & Insights Team, you will be a part of an agile product team that is responsible for the development, production, and transformation of financial data and reporting across the Consumer and Community Banking division. Your ability and passion to think beyond raw and disparate data will enable you to create data visualizations and intelligence solutions that will be utilized by the organization's top leaders to achieve key strategic imperatives. You will assist in identifying and assessing opportunities to eliminate manual processes and utilize automation tools such as Alteryx or ThoughtSpot to implement automated solutions. You will be responsible for extracting, analyzing, and summarizing data for ad hoc stakeholder requests, and play a significant role in transforming our data environment to a modernized cloud platform.

Job responsibilities:
Transform raw data into actionable insights, demonstrating a history of learning and implementing new technologies.
Lead the Finance Data & Insights Team, an agile product team, taking responsibility for the development, production, and transformation of financial data and reporting across CCB.
Improve the lives of our people and increase value to the firm by leveraging the power of data and the best tools to analyze data, generate insights, save time, improve processes and controls, and lead the organization in developing skills of the future.
Lead conversations with business teams and create data visualizations and intelligence solutions utilized by the organization's top leaders to reach key strategic imperatives.
Identify and assess opportunities to eliminate manual processes and utilize automation tools such as Alteryx or ThoughtSpot to bring automated solutions to life.
Extract, analyze, and summarize data for ad hoc stakeholder requests, playing a role in transforming the data environment to a modernized cloud platform.
Required qualifications, capabilities and skills:
Minimum 8 years of experience in SQL is a MUST
Minimum 8 years of experience developing data visualization and presentations, with ThoughtSpot experience
Experience with data wrangling tools such as Alteryx
Experience with relational databases, utilizing SQL to pull and summarize large datasets, report creation and ad-hoc analyses
Knowledge of modern MPP databases and big-data (Hadoop) concepts
Experience in reporting development and testing, and ability to interpret unstructured data and draw objective inferences given known limitations of the data
Demonstrated ability to think beyond raw data and to understand the underlying business context and sense business opportunities hidden in data
Strong written and oral communication skills; ability to communicate effectively with all levels of management and partners from a variety of business functions
Experience with Hive, Spark SQL, Impala or other big-data query tools
AWS, Databricks, Snowflake, or other Cloud Data Warehouse experience
Highly motivated, self-directed, curious to learn new technologies

For this particular role, we are unable to sponsor any type of work visa including but not limited to H1B, H4-EAD, OPT, TN, or L visas. This role does not have a relocation allowance tied to it, so all candidates must be local to the Jersey City, NJ or Columbus, OH office or willing to relocate on their own immediately upon hiring. Candidates must be able to physically work in our Jersey City, NJ or Columbus, OH office 3 days a week and remotely from home 2 days per week. The specific schedule will be determined by direct management.

ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

ABOUT THE TEAM
Our Consumer & Community Banking division serves our Chase customers through a range of financial services, including personal banking, credit cards, mortgages, auto financing, investment advice, small business loans and payment processing. We’re proud to lead the U.S. in credit card sales and deposit growth and have the most-used digital solutions – all while ranking first in customer satisfaction. The CCB Data & Analytics team responsibly leverages data across Chase to build competitive advantages for the businesses while providing value and protection for customers.
The team encompasses a variety of disciplines from data governance and strategy to reporting, data science and machine learning. We have a strong partnership with Technology, which provides cutting edge data and analytics infrastructure. The team powers Chase with insights to create the best customer and business outcomes.
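As a small illustration of the SQL-plus-summarization work this role centres on, here is a hedged sketch using pandas and SQLAlchemy; the connection string, table, and columns are invented placeholders, not JPMorgan systems.

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:pass@host/dbname")   # placeholder DSN

query = """
    SELECT line_of_business, reporting_month, SUM(amount) AS total
    FROM ledger
    GROUP BY line_of_business, reporting_month
"""
df = pd.read_sql(query, engine)

# Pivot to one column per line of business for a leadership-style summary view.
summary = df.pivot(index="reporting_month", columns="line_of_business", values="total")
print(summary.head())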

Posted 1 week ago

Apply

10.0 years

0 Lacs

Bengaluru

On-site

JOB DESCRIPTION
Your unmatched expertise and unrelenting quest for outcomes are the driving forces for transformation that inspire high-quality solutions. You are a crucial member of a diverse team of thought leaders committed to leaving a positive mark on the industry. As a Principal Data Architect at JPMorgan Chase within Consumer and Community Banking, you will provide expertise to enhance and develop data architecture platforms based on modern cloud-based technologies, as well as support the adoption of strategic global solutions. You will leverage your advanced data architecture capabilities and collaborate with colleagues across the organization to drive best-in-class outcomes that achieve the target state architecture goals.

Job responsibilities
Advise cross-functional teams on data architecture solutions to achieve the target state architecture and improve current technologies.
Lead the design, implementation, and maintenance of scalable, high-performance data architectures, including data lakes, data warehouses, and data integration solutions.
Collaborate with cross-functional teams, including IT, business units, and analytics teams, to ensure data architecture supports business needs and enables data-driven decision-making.
Drive the adoption of emerging data technologies and methodologies to enhance data capabilities and improve efficiency.
Develop multi-year roadmaps aligned with business and data architecture strategy and priorities.
Create complex and scalable data frameworks using appropriate software design.
Review and debug code written by others to deliver secure, high-quality production code.
Serve as the function’s go-to subject matter expert.
Contribute to the development of technical methods in specialized fields in line with the latest product development methodologies.
Create durable, reusable data frameworks using new technology to meet the needs of the business.
Champion the firm’s culture of diversity, equity, inclusion, and respect.
Mentor and coach junior architects and technologists.

Required qualifications, capabilities, and skills
Formal training or certification on data management concepts and 10+ years applied experience. In addition, 5+ years of experience leading technologists to manage, anticipate and solve complex technical items within your domain of expertise.
Hands-on practical experience delivering system design, application development, testing, and operational stability.
Expert in one or more architecture disciplines and programming languages.
Deep knowledge of data architecture, best practices, and industry trends.
Experience with cloud-based data platforms (e.g., AWS, Azure, Google Cloud) and big data technologies (e.g., Hadoop, Spark).
Advanced knowledge of application development and technical processes with considerable in-depth knowledge in one or more technical disciplines (e.g., cloud, artificial intelligence, machine learning, mobile, etc.).
Experience applying expertise and new methods to determine solutions for complex architecture problems in one or more technical disciplines.
Ability to present and effectively communicate with Senior Leaders and Executives.

ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands.
Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

ABOUT THE TEAM
Our Consumer & Community Banking division serves our Chase customers through a range of financial services, including personal banking, credit cards, mortgages, auto financing, investment advice, small business loans and payment processing. We’re proud to lead the U.S. in credit card sales and deposit growth and have the most-used digital solutions – all while ranking first in customer satisfaction.

Posted 1 week ago

Apply

5.0 - 8.0 years

9 - 10 Lacs

Bengaluru

On-site

Sr Data Analyst

About us:
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview about TII
At Target, we have a timeless purpose and a proven strategy. And that hasn’t happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target’s global team and has more than 4,000 team members supporting the company’s global strategy and operations.

As a Sr Data Analyst for Target’s Merch Data Analytics team you’ll:
Support our world-class Merchandising leadership team at Target with critical data analysis that helps the Merch business team make profitable decisions.
Enable faster, smarter and more scalable decision-making to compete and win in the modern retail market.
Collaborate with stakeholders and understand their priorities/roadmap to drive business strategies using data.
Interface with Target business representatives to validate business requirements/requests for analysis and present final analytical results.
Design, develop, and deliver analytical solutions resulting in decision support or models.
Gather required data and perform data analysis to support needs.
Communicate the impact of proposed solutions to business partners.
Evaluate processes, analyze and interpret statistical data.
Develop business acumen and cultivate client relationships.
Present results in a manner that business partners can understand, translating scientific methodology into business terms.
Document analytical methodologies used in the execution of analytical projects.
Participate in a knowledge sharing system to support iterative model builds.
Adhere to corporate information protection standards.
Keep up to date on industry trends, best practices, and emerging methodologies.

Requirements / About You:
Experience: 5-8 years overall, with 3-5 years of relevant experience
Qualification: B.Tech/B.E. or a Master's in Statistics/Econometrics/Mathematics or equivalent
1. Extensive exposure to Structured Query Language (SQL), SQL optimization and DW/BI concepts.
2. Proven hands-on experience in a BI visualization tool (i.e. Tableau, Domo, MSTR10, Qlik) with the ability to learn additional vendor and proprietary visualization tools.
3. Strong knowledge of structured (i.e. Teradata, Oracle, Hive) and unstructured databases, including the Hadoop Distributed File System (HDFS). Exposure and extensive hands-on work with large data sets.
4. Hands-on experience in R, Python, Hive or other open-source languages/databases.
5. Hands-on experience in advanced analytical techniques like regression, time-series models and classification techniques, and a conceptual understanding of all the techniques mentioned above.
6. Git source code management and experience working in an agile environment.
7. Strong attention to detail, excellent diagnostic and problem-solving skills.
8. Highly self-motivated with a strong sense of urgency; able to work both independently and in team settings in a fast-paced environment, with the capability to manage urgent timelines.
9. Competent and curious; asks questions and learns to fill gaps; desire to teach and learn.
10. Excellent communication, service orientation and strong relationship-building skills.
11. Experience with Retail, Merchandising or Marketing will be a strong add-on.

Useful Links-
Life at Target- https://india.target.com/
Benefits- https://india.target.com/life-at-target/workplace/benefits
Culture- https://india.target.com/life-at-target/belonging
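To illustrate the regression and time-series techniques in item 5, a toy trend model in scikit-learn might look like this; the weekly sales series is synthetic.

import numpy as np
from sklearn.linear_model import LinearRegression

weeks = np.arange(52).reshape(-1, 1)   # week index as the sole feature
sales = 100 + 2.5 * weeks.ravel() + np.random.default_rng(1).normal(0, 5, 52)

model = LinearRegression().fit(weeks, sales)
print("weekly trend:", model.coef_[0])
print("forecast for week 53:", model.predict([[53]])[0])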

Posted 1 week ago

Apply