
3344 Big Data Jobs - Page 49

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

10.0 - 17.0 years

30 - 40 Lacs

Bengaluru

Work from Office

The Opportunity
Architect, design, and develop a multi-cloud, hybrid-cloud data product using distributed computing technology that spans multiple availability zones. Develop cloud-first solutions that give customers cloud and app mobility advantages and help them keep production workloads running smoothly no matter which cloud they choose. The Nutanix Move team is looking to hire a strong developer/technical lead in the areas of virtualization, orchestration, and systems programming.

About the Team
The Nutanix Files team, a group dedicated to enhancing storage services with a software-defined scale-out file storage solution, is geographically diverse, spanning India, Serbia, and the US. Our team culture emphasizes collaboration, empowerment, and a results-driven focus, creating an environment where every member is valued, supported, and encouraged to take initiative and ownership. The team prioritizes continuous learning, challenging problem-solving, and open communication to foster unity and success within the group. You will report to the Director of Engineering at Nutanix, a seasoned professional with over 25 years of industry experience and a passion for fostering growth and empowerment. Our team follows a hybrid work model in which members typically spend 2-3 days a week in the office to collaborate, build relationships, and solve problems efficiently. This setup promotes knowledge sharing, continuous learning, and innovation within the team, enhancing our collective expertise and problem-solving capabilities.

Your Role
Develop solutions that enable seamless mobility of applications, files, and other artifacts across different hypervisors, environments, and public clouds such as AWS, Azure, and Google Cloud. Translate product requirements into detailed architecture and design. Work on performance, scale-out, and resiliency. Work closely with development, test, documentation, and product management teams to deliver high-quality products in a fast-paced environment. Engage with customers and support when needed to solve issues.

What You Will Bring
7-12 years of experience as a systems (virtualization/backend) developer. A love of programming and rock-solid skills in one or more languages: Go, C. Development experience in a couple of the following areas: Linux operating systems, database back-ends, cloud technologies. Experience with virtualization, Docker/containers, and the storage domain. Experience working with virtualization technologies like VMware, KVM, Hyper-V. A Bachelor's degree in Computer Science or a related field is required.

Work Arrangement
Hybrid: This role operates in a hybrid capacity, blending the benefits of remote work with the advantages of in-person collaboration. For most roles, that will mean coming into an office a minimum of 3 days per week; however, certain roles and/or teams may require more frequent in-office presence. Additional team-specific guidance and norms will be provided by your manager.

Posted 1 month ago

Apply

6.0 - 10.0 years

14 - 18 Lacs

Hyderabad

Hybrid

Location: Hyderabad (Hybrid). Please share your resume with +91 9361912009 / +91 6382321843. Job Description: AWS (EMR, S3, Glue, Airflow, RDS, DynamoDB, or similar); CI/CD (Jenkins or another); relational database experience (any); NoSQL database experience (any); microservices, domain services, API gateways, or similar; containers (Docker, K8s, or similar). Required Skills: PySpark, Scala/Java, AWS, Jenkins

Posted 1 month ago

Apply

10.0 - 17.0 years

30 - 40 Lacs

Pune

Hybrid

The Opportunity
Join our team at Nutanix as a Staff Engineer in our Pune office, where you will play a crucial role in providing a solid Nutanix Cloud Manager (NCM) product to our customers. Your mission will be to design and build an observability platform that allows our customers to proactively monitor and manage their infrastructure and applications. By building a reliable NCM data platform and driving the adoption of best practices, you will contribute to the success of our products. This is a unique opportunity to work with cutting-edge technologies, lead and mentor engineers, and be part of a fast-paced environment where autonomy and ownership are valued. The work will range from analyzing application requirements, proposing and benchmarking databases, and designing data processing pipelines to data analysis and recommending best practices based on workload.

About the Team
At Nutanix, you'll be joining the Insights team, a dynamic and innovative group dedicated to leveraging data analytics for impactful business decision-making. With team members located in both the US and India, we foster a collaborative environment that encourages sharing diverse perspectives and ideas. Our culture is rooted in creativity and continuous improvement, allowing us to drive meaningful change and deliver exceptional results. You will report to the Director of Engineering, who values open communication and mentorship, ensuring that each team member has the support and guidance necessary to excel in their role. Our work setup is hybrid, requiring team members to be in the office 2-3 days a week, which balances flexibility with the benefits of in-person collaboration.

Your Role
Drive technical direction and architecture for the NCM Data Platform, working with other senior engineers. Design and develop the next generation of Data Platform features for on-prem and cloud. Drive data modeling discussions with senior engineers to ensure the best solution for requirements from various service teams. Collaborate with Product Management, QA, and documentation teams across multiple geographies to deliver high-quality products and services. Work across all components of a big data platform, such as the API gateway, message bus, databases, and database abstraction layers. Mentor junior engineers and drive best practices for design/code reviews. Propose and drive adoption of best practices for operationalizing data platforms, including data catalogs, tracing capabilities, configuration-driven change, etc.

What You Will Bring
15+ years of experience with a Bachelor's or Master's degree in computer science or a related stream. Experience architecting and building highly available and scalable data platforms. Experience with one or more of the following languages: Python, Golang, C++, Java. Experience working with database technologies and optimizing workloads. Experience working with Big Data technologies such as message queues/API gateways, the Kubernetes ecosystem, and microservice patterns. An owner's mindset and prior experience leading teams in a fast-paced and demanding environment, with good knowledge of SDLC practices. Awareness of Data Marts, Data Lakes, and Data Warehouses is a plus. Experience working with one or more of the cloud platforms (AWS, Azure, GCP, etc.) is a plus.

Posted 1 month ago

Apply

6.0 - 10.0 years

15 - 25 Lacs

Bengaluru

Work from Office

About the Team
We are looking for a passionate AI/ML Engineer with a strong background in Generative AI, NLP, and large-scale data systems. As an individual contributor (IC3/IC4), you'll work at the intersection of research and productization, building intelligent, scalable, and secure AI systems that power next-gen enterprise experiences across Nutanix platforms. This is your opportunity to design and deliver cutting-edge AI capabilities that simplify cloud infrastructure, elevate data insights, and transform user interactions.

Your Role
Design and deliver AI-driven capabilities such as intelligent assistants, autonomous infrastructure, anomaly detection, recommendation systems, and more. Contribute to and/or lead initiatives to fine-tune and optimize LLMs (e.g., LLaMA, Mistral, GPT) using Nutanix-specific datasets. Work on RAG pipelines, embeddings, vector DBs, prompt engineering, and responsible AI guardrails. Collaborate with product managers, designers, and platform engineers to integrate AI features seamlessly into Nutanix products. Own the end-to-end ML lifecycle: from data engineering, experimentation, training, and evaluation to deployment and monitoring. Build reusable ML components, services, and APIs with a focus on scalability, observability, and reliability. Contribute to internal AI platforms/tools and best practices, including model evaluation frameworks and GPU/accelerator optimization. Stay on top of emerging AI/ML research and tools and help drive innovation across the org.

What You Will Bring
Basic Requirements (IC3/IC4): Bachelor's or Master's in Computer Science, Machine Learning, or a related field. 3-6 years (IC3) or 6-9 years (IC4) of industry experience in AI/ML, with at least 1-2 years in Generative AI and LLMs. Strong programming skills in Python and familiarity with ML/AI libraries (PyTorch, TensorFlow, Hugging Face, LangChain, etc.). Experience with fine-tuning LLMs, prompt engineering, embeddings, and working with vector databases (e.g., FAISS, Pinecone, Weaviate). Hands-on experience in MLOps: data versioning, model serving, and monitoring. Solid understanding of NLP techniques, Transformers, and encoder-decoder architectures. Familiarity with cloud-native architectures, Kubernetes, and scalable inference platforms (e.g., Triton, vLLM).

Preferred Qualifications
Experience with open-source LLMs (e.g., LLaMA 3, Mistral, Mixtral). Familiarity with retrieval-augmented generation (RAG) architectures and open-source toolkits like Haystack or LangChain. Experience working on a platform team building reusable ML components for wider adoption. Exposure to multi-agent systems, autonomous decision-making, or reinforcement learning is a plus. Understanding of Responsible AI principles, including bias detection, explainability, and hallucination mitigation.
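The retrieval step of the RAG pipelines this role mentions boils down to ranking documents by embedding similarity. A minimal, illustrative sketch of that step follows; the toy 3-dimensional vectors and document texts are made up for illustration, and a real pipeline would use an embedding model plus a vector database such as FAISS or Pinecone rather than a plain sorted list:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=2):
    # Rank documents by similarity to the query embedding and return
    # the top-k texts, as a vector DB would before prompt construction.
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

# Hypothetical corpus with toy "embeddings" (a real one would be high-dimensional).
corpus = [
    {"text": "How to resize a cluster", "vec": [0.9, 0.1, 0.0]},
    {"text": "Quarterly revenue report", "vec": [0.0, 0.2, 0.9]},
    {"text": "Scaling out storage nodes", "vec": [0.8, 0.3, 0.1]},
]

top = retrieve([1.0, 0.2, 0.0], corpus, k=2)
# The two infrastructure documents rank above the finance one.
```

The retrieved texts would then be placed into the LLM prompt, which is what makes the generation "retrieval-augmented".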

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Pune

Work from Office

Description: Hiring Data Engineer with AWS or GCP Cloud.

Role Summary: The Data Engineer will be responsible for designing, implementing, and maintaining the data infrastructure and pipelines necessary for AI/ML model training and deployment. They will work closely with data scientists and engineers to ensure data is clean, accessible, and efficiently processed.

Required Experience: • 6-8 years of experience in data engineering, ideally in financial services. • Strong proficiency in SQL, Python, and big data technologies (e.g., Hadoop, Spark). • Experience with cloud platforms (e.g., AWS, Azure, GCP) and data warehousing solutions. • Familiarity with ETL processes and tools. • Knowledge of data governance, security, and compliance best practices.

Key Responsibilities: • Build and maintain scalable data pipelines for data collection, processing, and analysis. • Ensure data quality and consistency for training and testing AI models. • Collaborate with data scientists and AI engineers to provide the required data for model development. • Optimize data storage and retrieval to support AI-driven applications. • Implement data governance practices to ensure compliance and security.

What We Offer:
Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment — or even abroad in one of our global centers or client facilities!
Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft-skill trainings.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and a GL Club where you can drink coffee or tea with your colleagues over a game of table tennis, and we offer discounts at popular stores and restaurants!

Posted 1 month ago

Apply

5.0 - 8.0 years

20 - 35 Lacs

Pune, Chennai, Bengaluru

Hybrid

Greetings from LTIMindtree!

About the job: Are you looking for a new career challenge? With LTIMindtree, are you ready to embark on a data-driven career? Working for a global leading manufacturing client, providing an engaging product experience through best-in-class PIM implementation and building rich, relevant, and trusted product information across channels and digital touchpoints so their end customers can make an informed purchase decision, will surely be a fulfilling experience.

Location: Pan India
Key Skills: Hadoop/Spark, Spark SQL, Java

Interested candidates, kindly apply at the link below and share your updated CV with Hemalatha1@ltimindtree.com
https://forms.office.com/r/zQucNTxa2U

Skills needed:
1. Hands-on experience with Java and Big Data technologies, including Spark, Hive, Impala
2. Experience with a streaming framework such as Kafka
3. Hands-on experience with object storage; should be able to develop data archival and retrieval patterns
4. Good to have: experience with any public cloud platform like AWS, Azure, GCP, etc.
5. Ready to upskill as and when needed on project technologies such as Ab Initio

Why join us? Work on industry-leading implementations for Tier-1 clients. Accelerated career growth and global exposure. Collaborative, inclusive work environment rooted in innovation. Exposure to a best-in-class automation framework. Innovation-first culture: we embrace automation, AI insights, and clean data.

Know someone who fits this perfectly? Tag them – let's connect the right talent with the right opportunity. DM or email to know more. Let's build something great together!

Posted 1 month ago

Apply

7.0 - 11.0 years

20 - 25 Lacs

Noida, Kolkata, Pune

Work from Office

Proficient in application, data, and infrastructure architecture disciplines. Advanced knowledge of architecture, design, and business processes. Hands-on experience with AWS. Proficiency in modern programming languages such as Python and Scala. Expertise in Big Data technologies like Hadoop, Spark, and PySpark. Experience with deployment tools for CI/CD, such as Jenkins. Design and develop integration solutions involving Hadoop/HDFS, Real-Time Systems, Data Warehouses, and Analytics solutions. Apply system development lifecycle methodologies, such as Waterfall and Agile. Understand and implement data architecture and modeling practices, including entity-relationship diagrams, normalization, abstraction, denormalization, dimensional modeling, and metadata modeling. Utilize knowledge of web technologies, application programming languages, OLTP/OLAP technologies, data strategy disciplines, relational databases, data warehouse development, and Big Data solutions. Work collaboratively in teams to develop meaningful relationships and achieve common goals. Strong analytical skills with deep expertise in SQL. Solid understanding of Big Data concepts, particularly with Spark and PySpark/Scala. Experience with CI/CD using Jenkins. Familiarity with NoSQL databases. Excellent communication skills.

Posted 1 month ago

Apply

4.0 - 9.0 years

7 - 8 Lacs

Bengaluru

Work from Office

Hiring for Snowflake Developer for Bangalore/Chennai. The individual must have 4+ years of diversified experience developing applications using the Java language, with 1+ years of experience in Big Data technologies such as Spark, Kafka/Spark Streaming, HBase, Hive, Oozie, Knox, Hadoop (Hortonworks), and the related ecosystem. Mandatory skills: Core Java, Spark, Kafka, Hive, SQL. Relevant experience in Big Data. Strong development experience. Strong experience in Spark, Scala, and Hive. Experience in Hadoop development. Experience with data loading tools like Sqoop. Knowledge of workflow schedulers like Oozie and Airflow. Proven understanding of Hadoop, HBase, and Sqoop. Experience in AWS or Azure. Good-to-have skills: Spark, Scala, Oozie, Hive, shell scripting, Jenkins, Ansible, GitHub, NiFi, Elastic, Kibana, Grafana, Kafka. Good experience with the Talend ETL tool. Experience creating and consuming RESTful web services. Knowledge of building stream-processing systems using solutions such as Spark Streaming and Kafka. Knowledge of the principles and components of Big Data processing and analytics is highly desired.

Posted 1 month ago

Apply

4.0 - 5.0 years

7 - 11 Lacs

Ahmedabad

Work from Office

AI/ML Engineer 03 | Job Opening | 4 to 5 Years' Experience

About This Role: Design and implement machine learning models and AI solutions, optimize algorithms, and integrate intelligent systems into applications. Work on data processing, model training, and deployment to solve real-world challenges using advanced technologies.

Experience: Minimum 4 to 5 years of experience.

Job Specification: Design, develop, and implement advanced AI/ML algorithms and models. Collaborate with cross-functional teams to understand business needs and translate them into technical requirements. Research and explore new AI/ML techniques and technologies. Develop and maintain scalable and efficient AI/ML infrastructure. Train and optimize AI/ML models on large datasets. Evaluate and improve the performance of AI/ML models. Stay up to date with the latest trends and developments in the field of AI/ML.

Additional Roles and Competencies: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. 3-4 years of hands-on experience in AI/ML development. Strong programming skills in Python. Expertise in machine learning algorithms and techniques (e.g., supervised learning, unsupervised learning, deep learning). Experience with chatbot applications. Experience with popular AI/ML frameworks and libraries (e.g., TensorFlow, PyTorch, Keras). Familiarity with cloud platforms (e.g., AWS, GCP, Azure). Excellent problem-solving and analytical skills. Ability to work independently and as part of a team. Passion for AI/ML and a desire to make a significant impact.

Preferred Skills: Experience with natural language processing (NLP) or computer vision. Knowledge of big data technologies (e.g., Hadoop, Spark). Experience with MLOps and DevOps practices. Publications or contributions to open-source AI/ML projects.
Qualifications BCA, MCA, BE-IT, BSC-IT(AI/ML), MSC-IT(AI/ML), B.SC-Computers, M.SC-Computer, B.Tech-Computer, M.Tech-Computer, BSC-Mathematics, MSC-Mathematics.

Posted 1 month ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Responsibilities: Analyze structured and unstructured data. Build and deploy machine learning models for text classification tasks. Develop visualization tools and dashboards to communicate findings to business partners. Collaborate with business, product, and engineering teams to clearly articulate data insights into actionable recommendations.

Skills: Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, Data Science, or a related field. 3+ years of experience in a data science or analytics role. Hands-on experience with the GCP, AWS, or Azure cloud platform. Familiarity with GCP Vertex AI is a plus. Strong ability in developing and deploying ML solutions. Proficient in Python, scikit-learn, PyTorch, and TensorFlow, with expertise in machine learning, NLP, and deep learning model development and optimization. Strong understanding of data patterns, feature engineering, and the model lifecycle from design to validation, tuning, and scaling. Exposure to big data tools (e.g., BigQuery, Spark) and MLOps practices for scalable, production-grade ML pipelines.

Posted 1 month ago

Apply

3.0 - 8.0 years

8 - 9 Lacs

Bengaluru

Work from Office

Amazon IN Platform Development team is looking to hire a rock-star Data/BI Engineer to build for pan-Amazon India businesses. Amazon India is at the core of the hustle at Amazon WW today, and the team is chartered with democratizing data access for the entire marketplace and adding productivity. That translates to owning the processing of every Amazon India transaction, for which the team is organized to have dedicated business owners and processes for each focus area. The BI Engineer will play a key role in contributing to the success of each focus area by partnering with respective business owners and leveraging data to identify areas of improvement and optimization. He/she will build deliverables like business process automation, payment behavior analysis, campaign analysis, fingertip metrics, and failure prediction that give business decision-making an edge and can scale with growth. The role sits in the sweet spot between the technology and business worlds and provides opportunity for growth, high business impact, and working with seasoned business leaders.

An ideal candidate will be someone with a sound technical background in the data domain (storage/processing/analytics), solid business acumen, and a strong automation/solution-oriented thought process. They will be a self-starter who can start with a business problem and work backwards to conceive and devise the best possible solution; a great communicator, at ease partnering with business owners and other internal/external teams; able to explore newer technology options if need be, with a high sense of ownership over every deliverable by the team; and constantly obsessed with customer delight and business impact, getting things done in business time.

Design, implement, and support a data infrastructure for the analytics needs of a large organization. Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies. Be enthusiastic about building deep domain knowledge about Amazon's business. Must possess strong verbal and written communication skills, be self-driven, and deliver high-quality results in a fast-paced environment. Enjoy working closely with your peers in a group of very smart and talented engineers. Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers. Explore and learn the latest AWS technologies to provide new capabilities and increase efficiency.

About the team: The India Data Engineering and Analytics (IDEA) team is the central data engineering team for Amazon India. Our vision is to simplify and accelerate data-driven decision making for Amazon India by providing cost-effective, easy, and timely access to high-quality data. We achieve this by building UDAI (Unified Data & Analytics Infrastructure for Amazon India), which serves as a central data platform and provides data engineering infrastructure, ready-to-use datasets, and self-service reporting capabilities. Our core responsibilities towards the India marketplace include (a) providing systems (infrastructure) and workflows that allow ingestion, storage, processing, and querying of data, (b) building ready-to-use datasets for easy and faster access to the data, (c) automating standard business analysis/reporting/dashboarding, and (d) empowering the business with self-service tools for deep dives and insight seeking.

3+ years of data engineering experience. Experience with data modeling, warehousing, and building ETL pipelines. Experience with SQL. Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions. Experience with non-relational databases/data stores (object storage, document or key-value stores, graph databases, column-family databases).
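The "extract, transform, and load data... using SQL" workflow this posting describes can be sketched in a few lines. This is a minimal, illustrative example using Python's stdlib sqlite3 in place of a warehouse like Redshift; the `orders` and `daily_revenue` tables and all values are made up for illustration:

```python
import sqlite3

# In-memory database stands in for a real warehouse.
con = sqlite3.connect(":memory:")

# Extract: raw transactional data lands in a staging table.
con.execute("CREATE TABLE orders (day TEXT, amount REAL, status TEXT)")
con.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("2024-01-01", 120.0, "shipped"),
     ("2024-01-01", 80.0, "cancelled"),
     ("2024-01-02", 200.0, "shipped")],
)

# Transform + load: aggregate shipped orders into a reporting table,
# the kind of ready-to-use dataset the team describes building.
con.execute("""
    CREATE TABLE daily_revenue AS
    SELECT day, SUM(amount) AS revenue
    FROM orders
    WHERE status = 'shipped'
    GROUP BY day
""")

rows = con.execute("SELECT day, revenue FROM daily_revenue ORDER BY day").fetchall()
# rows == [('2024-01-01', 120.0), ('2024-01-02', 200.0)]
```

In production the same shape would typically be expressed as a Glue/EMR job writing to Redshift or S3, scheduled by a workflow engine.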

Posted 1 month ago

Apply

3.0 - 8.0 years

45 - 55 Lacs

Bengaluru

Work from Office

If you are excited by the opportunity to build highly innovative, scalable, high-performance, and low-latency solutions for digital advertising, this may be the right career move for you. Be part of an innovative and passionate team responsible for building a successful, sustainable, and strategic international business for Amazon from the ground up. As a software engineer on the team: You will get an opportunity to work on building large-scale machine-learning infrastructure for online recommendation, ads ranking, personalization, and search. You will work on Big Data technologies such as AWS, Spark, Hive, Lucene/Solr, Elasticsearch, etc. You will drive appropriate technology choices for the business, lead the way for continuous innovation, and shape the future of India Advertising. You will follow the technical direction of our offerings and solutions, working with many different technologies across the India Advertising organization. You will code, troubleshoot, and support high-volume and low-latency distributed systems. What you create is also what you own. You will participate in implementing key functional areas of the customer experience, including website applications and platform services. You will work with the business team and project managers to convert functional requirements into technical features. You will own operating our products, driving excellence in feature stability, performance, and flexibility.

3+ years of non-internship professional software development experience. 2+ years of non-internship design or architecture (design patterns, reliability, and scaling) experience with new and existing systems. 3+ years of video games industry (supporting title development, release, or live ops) experience. Experience programming with at least one software programming language. 3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations. Bachelor's degree in computer science or equivalent.

Posted 1 month ago

Apply


3.0 - 5.0 years

13 - 17 Lacs

Noida, Chennai, Bengaluru

Work from Office

Educational Qualification: Bachelor of Engineering. Service Line: AI & Automation Service Line.

Responsibilities:
Data collection and preparation: Collect and prepare data for training and evaluating LLMs. This may involve cleaning and processing text data, or creating synthetic data.
Model development: Design and implement LLM models. This may involve choosing the right architecture, training the model, and tuning the hyperparameters.
Model evaluation: Evaluate the performance of LLM models. This may involve measuring the accuracy of the model on a held-out dataset, or assessing the quality of the generated text.
Model deployment: Deploy LLM models to production. This may involve packaging the model, creating a REST API, and deploying the model to a cloud computing platform.
Responsible AI: Should have proficient knowledge of Responsible AI and data privacy principles to ensure ethical data handling, transparency, and accountability in all stages of AI development. Must demonstrate a commitment to upholding privacy standards, mitigating bias, and fostering trust within data-driven initiatives.
Experience working with ML toolkits like R, NumPy, MATLAB, etc. Experience in data mining, statistical analysis, and data visualization.

Additional Responsibilities: Ability to develop value-creating strategies and models that enable clients to innovate, drive growth, and increase their business profitability. Good knowledge of software configuration management systems. Awareness of the latest technologies and industry trends.

Technical and Professional Requirements: Primary skills: Technology - Artificial Intelligence - Generative AI - Computer Vision; Technology - Big Data - Natural Language Processing (NLP); Technology - Machine Learning - Python.

Preferred Skills: Technology - Big Data - Natural Language Processing (NLP); Technology - Machine Learning - Python; Technology - Artificial Intelligence - Computer Vision; Technology - Machine Learning - Generative AI.
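The "model evaluation" step above (measuring accuracy on a held-out dataset) has a very simple core, sketched below. The keyword-rule "model" and the tiny held-out set are toy stand-ins for illustration only; a real evaluation would call an actual trained classifier or LLM:

```python
def accuracy(model, dataset):
    # Fraction of held-out examples the model labels correctly.
    correct = sum(1 for text, label in dataset if model(text) == label)
    return correct / len(dataset)

# Stand-in "model": a keyword rule in place of a real classifier.
def toy_sentiment(text):
    return "positive" if "good" in text else "negative"

# Hypothetical held-out set, never seen during "training".
held_out = [
    ("good product", "positive"),
    ("really good", "positive"),
    ("broken on arrival", "negative"),
    ("good value", "negative"),  # the keyword rule gets this one wrong
]

print(accuracy(toy_sentiment, held_out))  # prints 0.75
```

For generated text rather than labels, the same loop would instead apply a quality metric or a rubric-based judge to each held-out example.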

Posted 1 month ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Pune

Work from Office

Big Data Developer. JD: Overall 4+ years of experience. Data ingestion/ETL work using big data technologies (Apache frameworks/Hadoop ecosystem). ETL experience with data pipeline design to source data from various sources (SQL Server/SSIS, Python, PySpark). IBM TM1/TurboIntegrator/Planning Analytics Workspace experience a plus.

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Bengaluru

Work from Office

Big Data Developer - 2 resources. Extensive experience in Hadoop, Hive, HBase, and Spark. Hands-on development experience in Java and in Spark with Scala. Strong Java programming concepts and a clear understanding of design patterns. Experienced in implementing data munging, transformation, and processing solutions using Spark. Note: (a) Location: Bangalore (until the COVID situation improves, work from where you are). (b) Minimum of 3 to 5 years of work experience across multiple projects.

Posted 1 month ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Mumbai

Work from Office

We are looking for a hands-on techno-functional quality engineer capable of leading and supporting critical multi-year initiatives on the TaxSmart platform. The individual should be able to work with the team and stakeholders to build a plan for these initiatives and think through the various flavors of testing needed to ensure platform delivery, quality and stability. Additionally, they should be able to drive multi-year quality initiatives around enhancing the QE team's capabilities to achieve specific KRAs around F2B testing, defect metrics, in-sprint automation, reusability in frameworks, production issue leakage, fungibility and cross-skilling. Key Responsibilities Design, develop, and maintain reusable automated frameworks, tooling and utilities Build CI automation capability Collaborate with cross-functional teams and business users to understand application requirements and ensure comprehensive test coverage Test features developed manually, then invest in in-sprint automation Collaborate with the release and DevOps teams to support pre- and post-deployment tests Collaborate with business users, the TradeOps team, vendors etc. to support UAT needs as applicable People management for around 3-4 techno-functional QEs Required Skills 6+ years of experience in quality, having run complex initiatives in fintech organizations, preferably in the asset and wealth management domain Good understanding of AWM functioning and trading Strong programming skills in Python, NodeJS, Java, Groovy, SQL and big data, and an appetite to work on diverse technology at scale Experience with version control systems (e.g., Git) and CI/CD tools Excellent problem-solving, communication, collaboration and presentation skills to work effectively in a team environment Good understanding of industry best practices around QE and development across SDLC, metrics, release management etc. Excellent leadership skills to coach and groom individuals in the team

Posted 1 month ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Hyderabad, Pune

Work from Office

Extensive experience in designing, developing, and deploying ETL/ELT Big Data pipelines using Azure Data Factory and Azure Databricks. Strong hands-on experience in SQL development with deep knowledge of optimization and tuning techniques in Azure DB, Synapse, and Azure Databricks. Experience in building Delta Lake and scalable, high-performing distributed systems. Proven expertise in developing and integrating Databricks notebooks using Spark and deploying jobs on Azure Cloud. Familiarity with Azure DevOps processes, including CI/CD pipelines, and code management via GitHub is preferable. Strong work ethic, excellent communication skills, and effective time management, with the ability to collaborate across diverse teams and stakeholders.
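A minimal illustration of the "optimization and tuning techniques" this posting asks for, shown here with SQLite purely because it is self-contained; the same scan-versus-index-seek reasoning carries over to Azure SQL DB, Synapse, and Databricks SQL (the `orders` table and column names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")
con.executemany("INSERT INTO orders (customer, amount) VALUES (?, ?)",
                [(f"cust{i % 100}", float(i)) for i in range(1000)])

# Without an index, filtering on 'customer' forces a full table scan.
plan_before = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'cust7'").fetchall()

con.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

# With the index, the same query becomes an index search.
plan_after = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'cust7'").fetchall()

before_text = " ".join(row[-1] for row in plan_before)
after_text = " ".join(row[-1] for row in plan_after)
```

Reading the query plan before and after an index (or, in Spark/Databricks, inspecting the physical plan and partitioning) is the core loop of SQL tuning work.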

Posted 1 month ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Mumbai

Work from Office

Are you ready to lead in the dynamic world of data management? We are seeking a visionary Vice President of Data Management in Product Development to join our highly visible data organization. This is your chance to promote innovation, shape strategic data initiatives, and make a significant impact on our business. As a key leader, you'll transform data into actionable insights, empowering teams and steering the organization towards data excellence. If you are a strategic thinker with a passion for data and proven leadership, join us in redefining the future of data management. As a Vice President in the Product Development team, you will conduct data analysis and create, migrate and enhance data for strategic and operational improvement initiatives: from sourcing of new data and ingestion of those data feeds, to quality rules for quality assurance and creative solutions to streamline and improve at every step. This role involves leading projects and programs that support AI/ML development, ensuring the organization leverages data to drive innovation and business growth. The VP will collaborate with cross-functional teams to define and execute the data strategy, ensuring alignment with business objectives. Strong project management skills are essential to drive initiatives forward, and great presentation skills are required to tell compelling stories through data insights, engaging stakeholders and influencing business strategies. Job Responsibilities Craft compelling narratives using data insights to engage and inform stakeholders. Deliver presentations that effectively communicate findings and recommendations, influencing business strategies. Create comprehensive project plans that outline scope, objectives, timelines, and deliverables. Document project requirements, milestones, and resource allocations to ensure clarity and alignment among stakeholders. Maintain detailed records of project progress, changes, and outcomes, providing transparency and accountability throughout the project lifecycle. Lead cross-functional teams in the execution of data projects and programs, ensuring timely delivery and quality outcomes. Manage resources, budgets, and timelines effectively to achieve project goals. Work closely with business leaders, IT, and other departments to ensure data initiatives align with organizational priorities. Communicate data strategy and product updates to stakeholders, ensuring transparency and alignment. Establish and enforce data governance policies and practices to ensure data quality, security, and compliance. Accelerate data analysis and issue resolution through effective program governance, process/data mapping and data deep-dives Required Qualifications, Capabilities, and Skills Bachelor's degree and a minimum of 8 years in a data-related role, a change/transformation role or a related field. Excellent communication, presentation (both verbal and written) and influencing skills - the candidate will be dealing with stakeholders of varying degrees of seniority across Corporate/Firmwide, Finance, Risk, Compliance, Operations and Technology teams Strong metrics-driven mindset to track progress, drive action, and communicate and manage effectively Experience in a Program Management, Data Quality or Control organization and a track record of delivering complex projects. Experience in producing PowerPoint presentations for senior audiences A passion for data, data skills, big data and data tools. Expertise in querying data, curiosity about data, and a sensible regard for data accuracy; data interrogation and quality checking. Ability to work collaboratively with diverse teams.

Posted 1 month ago

Apply

10.0 - 15.0 years

35 - 40 Lacs

Bengaluru

Work from Office

We have an exciting opportunity for you to lead impactful data science projects and advance your career. As a Data Scientist Lead within the Focused Analytics Solutions Team, you will design, build, and deliver analytical solutions that drive business outcomes. You will be part of a dynamic team that values creativity, problem-solving, and client interaction. Job Responsibilities Lead teams of data scientists to deliver end-to-end analytical projects. Serve as a trusted advisor throughout project lifecycles. Guide teams in synthesizing findings for clients and executives. Establish and manage relationships with internal clients. Recruit, develop, and retain talent through open communication. Set standards of excellence for the team. Maintain a rigorous controls environment for accurate results. Required Qualifications, Capabilities, and Skills 10+ years of industry experience in data science or business analytics. Experience leading project teams and mentoring talent. Knowledge of statistical, data science, and machine learning methodologies. Excellent communication skills for conveying complex information. Familiarity with various analytical project types. Proficiency in SQL, Python, Tableau, and big data technologies. Master's degree in a relevant quantitative field. Preferred Qualifications, Capabilities, and Skills Advanced degree in a relevant quantitative field. Direct people management experience. Financial services experience.

Posted 1 month ago

Apply

4.0 - 8.0 years

45 - 50 Lacs

Mumbai

Work from Office

Solution and deliver analytics offerings for the bank to drive revenue or enable cost optimization. Build models, move them to production, and maintain/enhance them on an ongoing basis in production. Become an analytics consultant and evangelist within the bank, finding analytical solutions to business problems. Position Responsibilities Work closely with the data warehouse team and other business teams to obtain relevant data for implementation. Be comfortable working with structured/unstructured data sources and be conversant in performing secondary research to explore third-party data sources to enrich existing data Support the overall digital acquisition strategy by focusing on segmenting/predicting response rates for leads, which is a prerequisite for improving response rates Create/supervise the building of models around channel migration, cross-sell and upsell, and support the overall customer engagement strategy Support the implementation of various technology (recommendation engine, campaign management solution, CRM) and data enablers (creation of data sets, marts etc.) for the analytics practice within the bank Implementation of specific use cases on big data platforms Essential Graduate (B.E/B.Sc Stats/M.Sc Stats or equivalent) 3+ years in the analytics space Managed diverse stakeholders from various teams, in complex environments Grasp of basic supervised/unsupervised ML algorithms and a demonstrated ability to learn quickly Thorough understanding of the banking domain would be a plus Experience in working with SQL and R or similar statistical programming languages; knowledge of other statistical programming languages like Python will be an added advantage Attributes Team player, detail-oriented, self-motivated individual Candidate should have a strong understanding of analytical modeling techniques and statistical concepts relevant to the application and evaluation of models.
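The "segmenting/predicting response rates for leads" responsibility above can be sketched with a deliberately simple per-segment baseline; real work would use proper supervised models, and the segment names and campaign outcomes below are invented for illustration:

```python
from collections import defaultdict

def segment_response_rates(history):
    """Estimate a per-segment response rate from past campaign outcomes."""
    counts = defaultdict(lambda: [0, 0])  # segment -> [responses, total]
    for segment, responded in history:
        counts[segment][1] += 1
        if responded:
            counts[segment][0] += 1
    return {seg: resp / total for seg, (resp, total) in counts.items()}

def rank_leads(leads, rates):
    """Order new leads by the estimated response rate of their segment."""
    return sorted(leads, key=lambda lead: rates.get(lead["segment"], 0.0),
                  reverse=True)

history = [("salaried", True), ("salaried", False), ("salaried", True),
           ("self_employed", False), ("self_employed", True),
           ("student", False), ("student", False)]
rates = segment_response_rates(history)
leads = [{"id": 1, "segment": "student"},
         {"id": 2, "segment": "salaried"},
         {"id": 3, "segment": "self_employed"}]
ranked = rank_leads(leads, rates)
```

A production version would replace the frequency table with a fitted classifier (e.g. logistic regression over lead features), but ranking leads by a predicted response probability is the same idea.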

Posted 1 month ago

Apply

3.0 - 7.0 years

12 - 16 Lacs

Pune

Work from Office

As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Proof of Concept (POC) Development: Develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions. Help in showcasing the ability of a Gen AI code assistant to refactor/rewrite and document code from one language to another Document solution architectures, design decisions, implementation details, and lessons learned. Stay up to date with the latest trends and advancements in AI, foundation models, and large language models. Evaluate emerging technologies, tools, and frameworks to assess their potential impact on solution design and implementation Preferred technical and professional experience Experience and working knowledge in COBOL and Java would be preferred Experience in code generation, code matching and code translation leveraging LLM capabilities would be a big plus Demonstrate a growth mindset to understand clients' business processes and challenges

Posted 1 month ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Mumbai

Work from Office

As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform. Responsibilities: Build data pipelines to ingest, process, and transform data from files, streams and databases. Process data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS. Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform. Develop streaming pipelines. Work with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka and cloud computing. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Total 5-7+ years of experience in data management (DW, DL, data platform, lakehouse) and data engineering skills Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala Minimum 3 years of experience on cloud data platforms on Azure Experience in Databricks, Azure HDInsight, Azure Data Factory, Synapse and SQL Server DB Exposure to streaming solutions and message brokers like Kafka Experience with Unix/Linux commands and basic work experience in shell scripting Preferred technical and professional experience Certification in Azure and Databricks, or Cloudera-certified Spark developers
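The streaming-pipeline work this role describes is typically implemented with Spark Structured Streaming and Kafka; as a hedged stand-in using only the standard library, the micro-batch pattern at its core (read a bounded window from an unbounded source, aggregate, emit) can be sketched like this, with invented sensor events in place of a real Kafka topic:

```python
from itertools import islice

def read_events():
    """Stand-in for a Kafka/stream source; yields (sensor_id, value) events."""
    data = [("s1", 10.0), ("s2", 5.0), ("s1", 14.0),
            ("s1", 12.0), ("s2", 7.0), ("s2", 9.0)]
    yield from data

def windowed_average(events, window_size):
    """Micro-batch style processing: average values per sensor per window."""
    it = iter(events)
    while True:
        batch = list(islice(it, window_size))  # one micro-batch
        if not batch:
            break
        agg = {}
        for sensor, value in batch:
            agg.setdefault(sensor, []).append(value)
        yield {sensor: sum(vals) / len(vals) for sensor, vals in agg.items()}

windows = list(windowed_average(read_events(), window_size=3))
```

In Spark Structured Streaming the same shape is declared with `groupBy(...).agg(avg(...))` over a windowed source, and the engine handles batching, state, and fault tolerance.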

Posted 1 month ago

Apply

2.0 - 7.0 years

8 - 12 Lacs

Bengaluru

Work from Office

As a Technical Consultant at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. You'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise We are seeking a highly skilled and experienced IIoT (Industrial Internet of Things) Engineer with a strong focus on OPC UA (Unified Architecture) and companion specifications, as well as expertise in edge computing. The ideal candidate will play a crucial role in designing, implementing, and maintaining IIoT solutions that enhance the efficiency, reliability, and security of industrial operations. Document solution architectures, design decisions, implementation details, and lessons learned. Stay up to date with the latest trends and advancements in AI, foundation models, and large language models. Evaluate emerging technologies, tools, and frameworks to assess their potential impact on solution design and implementation Preferred technical and professional experience Experience and working knowledge in COBOL and Java would be preferred Experience in code generation, code matching and code translation leveraging LLM capabilities would be a big plus Demonstrate a growth mindset to understand clients' business processes and challenges

Posted 1 month ago

Apply

2.0 - 5.0 years

6 - 10 Lacs

Pune

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows for source-to-target flows and implementing solutions that tackle the client's needs. Your primary responsibilities include: Design, build, optimize and support new and existing data models and ETL processes based on our client's business requirements Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Design, develop, and maintain Ab Initio graphs for extracting, transforming, and loading (ETL) data from diverse sources to various target systems. Implement data quality and validation processes within Ab Initio. Data modelling and analysis: collaborate with data architects and business analysts to understand data requirements and translate them into effective ETL processes. Analyse and model data to ensure optimal ETL design and performance. Ab Initio components: utilize components such as Transform Functions, Rollup, Join, Normalize, and others to build scalable and efficient data integration solutions. Implement best practices for reusable Ab Initio components Preferred technical and professional experience Optimize Ab Initio graphs for performance, ensuring efficient data processing and minimal resource utilization. Conduct performance tuning and troubleshooting as needed. Collaboration: work closely with cross-functional teams, including data analysts, database administrators, and quality assurance, to ensure seamless integration of ETL processes. Participate in design reviews and provide technical expertise to enhance overall solution quality. Documentation
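The "data quality and validation processes" this posting mentions would be built inside Ab Initio graphs (validation components routing failures to a reject port); as a hedged, tool-agnostic sketch of that pattern in Python, with invented rule names and row fields:

```python
def validate(rows, rules):
    """Split rows into accepted and rejected by named quality rules,
    roughly analogous to an ETL component's output and reject ports."""
    accepted, rejected = [], []
    for row in rows:
        failures = [name for name, check in rules.items() if not check(row)]
        if failures:
            rejected.append((row, failures))  # keep failure reasons for audit
        else:
            accepted.append(row)
    return accepted, rejected

rules = {
    "amount_positive": lambda r: r["amount"] > 0,
    "currency_known": lambda r: r["currency"] in {"INR", "USD"},
}
rows = [
    {"amount": 100.0, "currency": "INR"},
    {"amount": -5.0, "currency": "USD"},
    {"amount": 30.0, "currency": "XYZ"},
]
accepted, rejected = validate(rows, rules)
```

Recording *which* rule each rejected row failed is what makes the reject stream useful downstream, for defect metrics and for feeding corrected data back into the pipeline.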

Posted 1 month ago

Apply