
3301 Big Data Jobs - Page 9

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 5.0 years

2 - 6 Lacs

Vadodara

Work from Office

Key Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize Machine Learning models to achieve high accuracy and performance.
- Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness.
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Requirements:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred Qualifications:
- Experience with Big Data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.
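The listing asks for "techniques to improve model generalization" on imbalanced datasets without naming one; a common baseline is cost-sensitive class weighting. Below is a minimal pure-Python sketch of the "balanced" weighting heuristic (the function name and sample labels are illustrative, not from the listing):

```python
from collections import Counter

def balanced_class_weights(labels):
    """Per-class weights inversely proportional to class frequency,
    mirroring the common 'balanced' heuristic:
    weight(c) = n_samples / (n_classes * count(c))."""
    counts = Counter(labels)
    n_samples = len(labels)
    n_classes = len(counts)
    return {c: n_samples / (n_classes * k) for c, k in counts.items()}

# A 9:1 imbalanced binary problem: the minority class gets ~9x the weight,
# so each minority mistake costs the loss function proportionally more.
weights = balanced_class_weights([0] * 90 + [1] * 10)
```

These weights are typically passed to a framework's `class_weight`-style parameter so the loss up-weights minority-class errors.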

Posted 1 week ago

Apply

4.0 - 5.0 years

5 - 9 Lacs

Patna

Work from Office

Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize Machine Learning models to achieve high accuracy and performance.
- Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness.
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Required Skills:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred Skills:
- Experience with Big Data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.

Posted 1 week ago

Apply

4.0 - 5.0 years

5 - 9 Lacs

Nagpur

Work from Office

Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize Machine Learning models to achieve high accuracy and performance.
- Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness.
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Required Skills:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred Skills:
- Experience with Big Data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.

Posted 1 week ago

Apply

4.0 - 5.0 years

5 - 9 Lacs

Pune

Work from Office

Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize Machine Learning models to achieve high accuracy and performance.
- Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness.
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Required Skills:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred Skills:
- Experience with Big Data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.

Posted 1 week ago

Apply

4.0 - 5.0 years

2 - 6 Lacs

Patna

Work from Office

Key Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize Machine Learning models to achieve high accuracy and performance.
- Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness.
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Requirements:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred Qualifications:
- Experience with Big Data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.

Posted 1 week ago

Apply

4.0 - 5.0 years

5 - 9 Lacs

Pimpri-Chinchwad

Work from Office

Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize Machine Learning models to achieve high accuracy and performance.
- Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness.
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Required Skills:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred Skills:
- Experience with Big Data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.

Posted 1 week ago

Apply

4.0 - 5.0 years

2 - 6 Lacs

Pimpri-Chinchwad

Work from Office

Key Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize Machine Learning models to achieve high accuracy and performance.
- Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness.
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Requirements:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred Qualifications:
- Experience with Big Data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.

Posted 1 week ago

Apply

4.0 - 5.0 years

5 - 9 Lacs

Thane

Work from Office

Responsibilities:
- Conduct feature engineering, data analysis, and data exploration to extract valuable insights.
- Develop and optimize Machine Learning models to achieve high accuracy and performance.
- Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques.
- Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness.
- Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback.
- Collaborate with cross-functional teams to align ML solutions with business goals.
- Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models.
- Bring in the latest advancements in ML and AI to drive innovation.

Required Skills:
- 4-5 years of hands-on experience in Machine Learning and Deep Learning.
- Strong expertise in feature engineering, data exploration, and data preprocessing.
- Experience with imbalanced datasets and techniques to improve model generalization.
- Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks.
- Strong mathematical and statistical knowledge with problem-solving skills.
- Ability to optimize models for high accuracy and performance in real-world scenarios.

Preferred Skills:
- Experience with Big Data technologies (Hadoop, Spark, etc.).
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).
- Experience in automating ML pipelines with MLOps practices.
- Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.

Posted 1 week ago

Apply

5.0 - 10.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Responsibilities:
- Design and optimize batch/streaming data pipelines using Scala, Spark, and Kafka
- Implement real-time tokenization/cleansing microservices in Java
- Manage production workflows via Apache Airflow (batch scheduling)
- Conduct root-cause analysis of data incidents using Spark/Dynatrace logs
- Monitor EMR clusters and optimize performance via YARN/Dynatrace metrics
- Ensure data security through HashiCorp Vault (Transform Secrets Engine)
- Validate data integrity and configure alerting systems

Technical Requirements:
- Programming: Scala (Spark batch/streaming), Java (real-time microservices)
- Big Data Systems: Apache Spark, EMR, HDFS, YARN resource management
- Cloud & Storage: Amazon S3, EKS
- Security: HashiCorp Vault, tokenization vs. encryption (FPE)
- Orchestration: Apache Airflow (batch scheduling)

Operational Excellence:
- Spark log analysis, Dynatrace monitoring, incident handling, data validation

Mandatory Competencies:
- Expertise in distributed data processing (Spark on EMR/Hadoop)
- Proficiency in shell scripting and YARN job management
- Ability to implement format-preserving encryption (tokenization solutions)
- Experience with production troubleshooting (executor logs, metrics, RCA)
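The listing contrasts tokenization with format-preserving encryption (FPE) but specifies no implementation. The toy sketch below only illustrates the "format-preserving" property (a 16-digit card number tokenizes to another 16-digit number, separators intact); it is a keyed digit substitution for demonstration, not HashiCorp Vault's Transform engine or a real FF3 cipher, and every name in it is illustrative:

```python
import hashlib
import hmac

def tokenize_digits(pan: str, key: bytes) -> str:
    """Toy format-preserving tokenization: shift each digit by a keyed,
    position-dependent offset, keeping length and character classes.
    Demonstration only; NOT a production FPE scheme."""
    out = []
    for i, ch in enumerate(pan):
        if ch.isdigit():
            # Derive a per-position offset from the key via HMAC-SHA256.
            digest = hmac.new(key, str(i).encode(), hashlib.sha256).digest()
            out.append(str((int(ch) + digest[0]) % 10))
        else:
            out.append(ch)  # keep separators (e.g. '-') untouched
    return "".join(out)

token = tokenize_digits("4111-1111-1111-1111", b"demo-key")
```

The point of "format-preserving" is that downstream systems validating lengths and digit patterns keep working on the token without schema changes.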

Posted 1 week ago

Apply

3.0 - 8.0 years

50 - 55 Lacs

Hyderabad

Work from Office

Are you interested in building high-performance, scalable financial systems that support Amazon's current and future growth? Are you looking for ways to invent newer and simpler ways of building solutions? If so, we are looking for you to fill a challenging position in the Amazon Finance Technology team.

The Amazon Finance Technology team is looking for a Software Development Engineer who can help us create the next generation of distributed, scalable financial systems. Our ideal candidate thrives in a fast-paced environment and enjoys the challenge of highly complex business contexts that are typically being defined in real time. We need someone to design and develop services that facilitate global financial transactions worth billions of dollars annually.

As a Software Development Engineer, you will help solve a variety of technical challenges and build highly scalable solutions to unique problems for worldwide accounting/finance teams. You will work on big data problems making use of AWS services, design enterprise-scale systems, and develop and deploy highly scalable and reliable distributed services. You will tackle challenging, novel situations every day. Along the way, we guarantee that you will learn a ton, have fun, and make a huge impact.

- 3+ years of non-internship professional software development experience
- 2+ years of non-internship design or architecture (design patterns, reliability and scaling) experience with new and existing systems
- Bachelor's degree or equivalent
- 3+ years of programming experience with at least one software programming language
- 3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations

Posted 1 week ago

Apply

3.0 - 5.0 years

20 - 25 Lacs

Gurugram

Work from Office

Job Title: Associate Vice President - Data Science
Work Type: Permanent
Location: DLF Downtown - Gurgaon

It's more than a career at NAB. It's about more meaningful work, more global opportunities and more innovation beyond boundaries. Your job is just one part of your life. When you bring your ideas, energy, and hunger for growth, you'll be recognised and rewarded for your contribution in return. You'll have our support to excel for our customers, deliver positive change for our communities and grow your career.

NAB has established the NAB Innovation Centre India as a centre for operations and technology excellence to support NAB in delivering a faster, better, and more personalised experience to customers and colleagues. At NAB India, we're ramping up and growing at a very fast pace. Our passionate leaders recruit and develop high-performing people, empowering them to deliver exceptional outcomes to make a positive difference in the lives of our customers and our communities.

YOUR NEW ROLE:
- Interacts with product and service teams to identify questions and issues for data analysis and experiments.
- Develops and codes software programs, algorithms and automated processes to cleanse, integrate and evaluate large data sets from multiple disparate sources.
- Provides hands-on support as required in formulating a coherent cross-business approach and strategic/tactical plan for big data initiatives.
- Learns, adopts and leverages data science best practice to deliver quantitative improvements to the analytics and process modelling functions.
- Works with massive and complex data sets from multiple sources, utilising big data tools and techniques for the purposes of analysing, providing insight and validating hypotheses.
- Performs deep-dive analyses of experiments through reliable modelling methods that include numerous explanatory variables and covariates.
- Translates analytical insights into concrete, actionable recommendations for business, process or product improvements.
- Makes recommendations for the collection of new data or the refinement of existing data sources and storage.
- Develops best-practice guidelines for instrumentation and experimentation.

WHAT YOU WILL BRING:
- Ability to manipulate and analyse complex, high-volume, high-dimensionality data and metadata from varying sources.
- Strong passion for empirical research and for answering hard questions with data.
- Expert knowledge of analysis tools and big data technologies (Map/Reduce, Hadoop, Hive, etc.).
- Familiarity with relational/non-relational data manipulation, machine learning, and scientific statistical analysis.
- Ability to communicate complex quantitative analysis in a clear, precise, and actionable manner.
- Flexible analytical approach that allows for results at varying levels of precision.
- Solid understanding of and experience with programming logic and various paradigms.
- At least 3-5 years' experience in a data science environment (corporate, research/government or academia), coupled with tertiary qualifications to a Master's or PhD level in a relevant technical field.

A diverse and inclusive workplace works better for everyone: Our goal is to foster a culture that fills us with pride, rooted in trust and respect. NAB is committed to creating a positive and supportive environment where everyone is encouraged to embrace their true, authentic selves. A diverse and inclusive workplace where our differences are celebrated, and our contributions are valued. It's a huge part of what makes NAB such a special place to be.

More focus on you: We're committed to delivering a positive experience for our colleagues and a workplace you can be proud of. We support our colleagues to balance their careers and personal life through flexible working arrangements such as hybrid working and job sharing, and competitive financial and lifestyle benefits. We invest in our colleagues through world-class development programs (Distinctive Leadership and Career Qualified in Banking), and empower you to learn, grow and pursue exciting career opportunities.

Join NAB India: This is your chance to join NAB India and, along with your experience and expertise, help shape an innovation-driven organisation that focuses on making a positive impact in the lives of its customers, colleagues and communities. We're on LinkedIn: NAB Innovation Centre India

Posted 1 week ago

Apply

4.0 - 6.0 years

10 - 14 Lacs

Bengaluru

Work from Office

What You Will Need:
- Bachelor's/Master's degree in computer science (or similar degrees)
- 4-7 years of experience as a Data Scientist in a fast-paced organization, preferably B2C
- Familiarity with Neural Networks, Machine Learning, etc.
- Familiarity with tools like SQL, R, Python, etc.
- Strong understanding of Statistics and Linear Algebra
- Strong understanding of hypothesis/model testing and ability to identify common model-testing errors
- Experience designing and running A/B tests and drawing insights from them
- Proficiency in machine learning algorithms
- Excellent analytical skills to fetch data from reliable sources to generate accurate insights
- Experience in tech and product teams is a plus

Bonus points for:
- Experience in working on personalization or other ML problems
- Familiarity with Big Data tech stacks like Apache Spark, Hadoop, Redshift

Keywords: Algorithms, Apache Spark, Data Science, ETL, Large Scale, Linear Algebra, ML, Neural Networks, Python, SQL
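For the A/B-testing requirement above, the textbook two-proportion z-test can be sketched with only the standard library. The conversion counts are made up for illustration:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates,
    using the pooled-proportion standard error. Returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Control arm: 200/4000 converted (5.0%); variant: 260/4000 (6.5%).
z, p = two_proportion_ztest(200, 4000, 260, 4000)
```

Here p comes out well under 0.05, so at the usual threshold the variant's lift would be judged statistically significant; a common "model-testing error" the listing alludes to is running many such tests without correcting for multiple comparisons.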

Posted 1 week ago

Apply

0.0 - 2.0 years

2 - 4 Lacs

Gurugram

Work from Office

Overview of the Business: The Credit and Fraud Risk (CFR) team helps drive profitable business growth by reducing the risk of fraud and maintaining the industry's lowest credit loss rates. It uses an array of tools and ever-evolving technology to detect and combat fraud, minimize the disruption of good spending, and provide a world-class customer experience. The team leads efforts that leverage data and digital advancements to improve risk management as well as enable commerce and drive innovation, every day.

A single decision can have many outcomes. And when that decision affects millions of customers, it needs to be the right one. That's where our Credit & Fraud Risk (CFR) Analytics & Data Science CoE team comes in. Right from targeting the right customer for our products, to underwriting them, to managing their experience with Amex once they are onboarded, every decision is informed by groundbreaking analytics and data science. We help the company grow its business profitably while delivering the world's best customer experience, all powered by data. We are the backbone of all financial services operations at American Express and impact every aspect of the company.

As part of the team, you'll have the opportunity to work in one of the best companies for data scientists in the country. You will solve real-world business problems while getting exposure to the industry's top leaders in analytics, data science and machine learning. If you're passionate about solving complex problems and crafting solutions that impact millions, you should consider a career in CFR. The role involves development, deployment and validation of predictive models, and supporting the use of models in economic logic to enable profitable decisions across risk, fraud and marketing.

Responsibilities:
- Understand the core business of AXP and the levers behind various decisions
- Analyze large amounts of data to derive business insights and create innovative solutions
- Leverage the power of the closed loop through the Amex network to make decisions more intelligent and relevant
- Innovate with a focus on developing newer and better approaches using big data & machine learning solutions
- Clearly articulate and structure business findings across the prospect and customer domains for leadership and key partners
- Maintain an external lens and stay aware of developments in the fields of Finance/Payments/Analytics, etc.

Minimum Qualifications:
- MBA, Master's Degree in Economics, Statistics, Computer Science or related fields
- 0-18 months of experience in analytics and big data workstreams
- Ability to drive project deliverables to achieve business results
- Ability to work effectively in a team environment
- Strong communication and interpersonal skills
- Innovative problem solver with the ability to learn quickly and work independently on complex, unstructured initiatives
- Ability to integrate with cross-functional business partners worldwide
- Tools: SAS, R, Python, Hive, Spark, SQL
- Unsupervised and supervised techniques: active learning, transfer learning, neural models, decision trees, reinforcement learning, graphical models, Gaussian processes, Bayesian models, MapReduce techniques, attribute engineering

Preferred Qualifications:
- Expertise in coding, algorithms, and high-performance computing

Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Our Mission: 6sense is on a mission to revolutionize how B2B organizations create revenue by predicting the customers most likely to buy and recommending the best course of action to engage anonymous buying teams. 6sense Revenue AI is the only sales and marketing platform to unlock the ability to create, manage and convert high-quality pipeline to revenue.

Our People: People are the heart and soul of 6sense. We serve with passion and purpose. We live by our Being 6sense values of Accountability, Growth Mindset, Integrity, Fun and One Team. Every 6sensor plays a part in defining the future of our industry-leading technology. 6sense is a place where difference-makers roll up their sleeves, take risks, act with integrity, and measure success by the value we create for our customers. We want 6sense to be the best chapter of your career.

6sense is seeking a Senior Data Engineer to join a team designing, developing, and deploying its customer-centric applications. A Data Engineer at 6sense will have the opportunity to:
- Create, validate and maintain optimal data pipelines, and assemble large, complex data sets that meet functional and non-functional business requirements.
- Improve our current data pipelines: improve their performance, remove redundancy, and find ways to test before-vs-after rollout.
- Debug any issues that arise from data pipelines, especially performance issues.
- Experiment with new tools and new versions of Hive/Presto, etc.

Required qualifications and must-have skills:
- Excellent analytical and problem-solving skills
- 5+ years of work experience showing growth as a Data Engineer
- Strong hands-on experience with Big Data platforms like Hadoop / Hive / Spark / Kafka
- Strong experience in writing complex, optimized SQL queries across large data sets
- Experience with optimizing queries and underlying storage
- Comfortable with the Unix / Linux command line
- Experience in ETL/ELT and system design
- BE/BTech/BS or equivalent
- Exposure to AWS

Nice-to-have skills:
- Experience with key-value stores or NoSQL databases
- Experience writing Hive / Presto UDFs in Java
- Good understanding of Docker and container platforms like Mesos and Kubernetes
- Security-first architecture approach
- Application benchmarking and optimization
- Basic knowledge of AI

Interpersonal Attributes:
- You can work independently as well as part of a team.
- You take ownership of projects and drive them to conclusion.
- You're a good communicator and are capable of not just doing the work, but teaching others and explaining the "why" behind complicated technical decisions.
- You aren't afraid to roll up your sleeves: this role will evolve over time, and we'll want you to evolve with it!

Our Benefits: Full-time employees can take advantage of health coverage, paid parental leave, generous paid time off and holidays, quarterly self-care days off, and stock options. We'll make sure you have the equipment and support you need to work and connect with your teams, at home or in one of our offices. We have a growth-mindset culture that is represented in all that we do, from onboarding through to numerous learning and development initiatives, including access to our LinkedIn Learning platform. Employee well-being is also top of mind for us. We host quarterly wellness education sessions to encourage self-care and personal growth. From wellness days to ERG-hosted events, we celebrate and energize all 6sense employees and their backgrounds.

Equal Opportunity Employer: 6sense is an Equal Employment Opportunity and Affirmative Action Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status. If you require reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employee selection process, please direct your inquiries to jobs@6sense.com.

We are aware of recruiting impersonation attempts that are not affiliated with 6sense in any way. All email communications from 6sense will originate from the @6sense.com domain. We will not initially contact you via text message and will never request payments. If you are uncertain whether you have been contacted by an official 6sense employee, reach out to jobs@6sense.com.
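The "complex, optimized SQL queries across large data sets" skill in the listing above largely comes down to pushing filters and aggregation into the engine instead of looping over rows in application code. A minimal sketch, with SQLite standing in for Hive/Presto and an illustrative events table of my own invention:

```python
import sqlite3

# In-memory table standing in for a warehouse fact table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id TEXT, event TEXT, amount REAL)")
rows = [("u1", "purchase", 30.0), ("u1", "purchase", 20.0),
        ("u2", "view", 0.0), ("u2", "purchase", 15.0)]
con.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)

# Filter and aggregate inside the engine: the WHERE clause prunes rows
# before GROUP BY, rather than fetching everything into Python first.
query = """
SELECT user_id, SUM(amount) AS revenue
FROM events
WHERE event = 'purchase'
GROUP BY user_id
ORDER BY revenue DESC
"""
result = con.execute(query).fetchall()
```

On a real distributed engine the same shape matters more: predicate pushdown and partition pruning decide how much data ever leaves storage.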

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

We are seeking a skilled Senior Software Engineer to lead the integration of Cloudera Manager metrics with enterprise Grafana using OpenTelemetry. This role will also contribute to design and solutioning to ensure that all non-functional requirements (NFRs) for our Cloudera Private Cloud platform are met to specification. You will be accountable for design and implementation in the OpenTelemetry workstream. To deliver against this, you will collaborate with platform and enterprise engineering teams, enterprise architecture, and product owners to ensure delivery against accelerated timelines.

Key Responsibilities:
- Design and implement the integration of Cloudera Manager metrics into enterprise Grafana via OpenTelemetry.
- Develop and document robust monitoring and observability solutions for Cloudera clusters.
- Collaborate with other engineers to solution NFRs addressing performance, scalability, security, and availability of our clusters.
- Document integration processes, cluster designs and NFR validation results.

Required Skills:
- 5+ years of hands-on experience with Cloudera Manager, JMX metrics, and Cloudera cluster implementation or administration
- 3+ years of Python and/or scripting experience related to automation and APIs
- 3+ years of hands-on experience integrating cluster metrics with Grafana or similar via OpenTelemetry
- Strong understanding of OpenTelemetry architecture and how to deploy and configure the OpenTelemetry Collector
- Strong understanding of telemetry protocols such as OTLP, Prometheus or StatsD
- Strong understanding of and experience with distributed data platforms and the big data ecosystem (e.g. Hadoop, Hive, Spark)
- Ability to work independently and collaborate effectively within cross-functional teams
- Strong communication and documentation skills

Keywords: Grafana, Python API, Telemetry
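Of the telemetry protocols the listing names, the Prometheus text exposition format is the simplest to show concretely: each sample is one line of `name{labels} value`, which a Prometheus server scrapes and Grafana then queries. A small sketch of rendering one such line; the metric name and labels are hypothetical, not Cloudera Manager's actual metric names:

```python
def prometheus_line(name, labels, value):
    """Render one sample in the Prometheus text exposition format.
    Label values must have backslash, double-quote and newline escaped."""
    def esc(v):
        return v.replace("\\", "\\\\").replace('"', '\\"').replace("\n", "\\n")
    body = ",".join(f'{k}="{esc(v)}"' for k, v in sorted(labels.items()))
    return f"{name}{{{body}}} {value}"

line = prometheus_line(
    "cloudera_host_cpu_percent",          # hypothetical metric name
    {"cluster": "prod", "host": "dn01"},  # hypothetical labels
    42.5,
)
```

In the actual integration, an OpenTelemetry Collector would typically receive metrics over OTLP and expose them in exactly this format for scraping, so no hand-rolled formatting is needed; the sketch just makes the wire format visible.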

Posted 1 week ago

Apply

9.0 - 12.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Essential Functions Independently responsible for end-to-end planning and execution of one or more medium- to high-complexity programs and/or Agile teams. Day-to-day accountability for tracking scope, schedule, budget, risks, deployments, and communications for projects, initiatives, and smaller programs. Effectively communicate and address obstacles delaying projects from delivering against objectives. Effective communication showcased in reporting, notes, documentation, FAQs, escalations, and more. Build effective partner/stakeholder relationships through effective communication and facilitation. Support implementation of Visa project methodology required artifacts. Support review, sizing, and estimation of work based on One Pagers, Epics, or CA requirements for AOP/TPIC. Manage preparation for Quarterly Business Reviews and lead Quarterly Planning sessions. Create and be the custodian for living program roadmaps. As Scrum coach, coach multiple teams through the software development life cycle using Agile, Scrum, and Lean practices, and work cross-functionally throughout the company to ensure projects are developed and deployed with quality and timely delivery into our production systems. Demonstrated curiosity and ability to understand architectural and technical aspects of Data Platform and/or DevOps projects. Lead role for PM-relevant improvement initiatives. 9 or more years of relevant work experience with a Bachelor's Degree, or 7 or more years of relevant experience with an Advanced Degree (e.g., Master's, MBA, JD, MD), or 3 or more years of experience with a PhD. Bachelor's or Master's in a STEM major

Posted 1 week ago

Apply

10.0 - 15.0 years

35 - 40 Lacs

Bengaluru

Work from Office

In this role, you will lead the design, development, and optimisation of large-scale, distributed data systems that power OCI's critical services. You'll work closely with cross-functional teams to build scalable data infrastructure, real-time analytics pipelines, and intuitive data visualisation tools. We're looking for a highly technical leader with deep expertise in Analytics, Business Intelligence, Data Visualisation, Java, and Microservices. Key Responsibilities: Architect and implement scalable, reliable data solutions using modern cloud-native technologies. Collaborate with engineering, product, and operations teams to define and deliver key initiatives. Lead the development of intuitive and interactive data visualisations that help users explore and understand complex datasets. Mentor and guide junior engineers, promoting best practices in data engineering and system design. Lead architecture reviews, design discussions, and implementation strategies. Preferred Qualifications: 10+ years of experience in software engineering, with at least 5 years in data/analytics-focused roles. Strong proficiency in Java, with solid experience in building and deploying microservices in production environments. Deep understanding of data modeling, ETL/ELT processes, and analytics best practices. Experience with BI tools such as Oracle Analytics Cloud, Tableau, Power BI. Experience building data platforms in a cloud environment (preferably OCI, AWS, Azure, or GCP). Proven ability to lead complex technical projects end-to-end. Strong understanding of RESTful APIs, containerization (Docker/Kubernetes), and DevOps practices. Excellent communication and collaboration skills, with a passion for mentoring and leading technical teams. As a member of the software engineering division, you will take an active role in the definition and evolution of standard practices and procedures.
In this role, you'll lead the design and development of scalable, cloud-native data systems and pipelines that power critical OCI services. We're looking for someone with deep experience in distributed systems and big data technologies, and a strong background in building data platforms in the cloud. If you're passionate about building high-performance data infrastructure at scale, we'd love to hear from you.

Posted 1 week ago

Apply

5.0 - 8.0 years

27 - 42 Lacs

Bengaluru

Work from Office

Job Summary As a Software Engineer at NetApp India’s R&D division, you will be responsible for the design, development and validation of software for Big Data Engineering across both cloud and on-premises environments. You will be part of a highly skilled technical team named NetApp Active IQ. The Active IQ DataHub platform processes over 10 trillion data points per month that feed a multi-petabyte data lake. The platform is built using Kafka, a serverless platform running on Kubernetes, Spark and various NoSQL databases. This platform enables the use of advanced AI and ML techniques to uncover opportunities to proactively protect and optimize NetApp storage, and then provides the insights and actions to make it happen. We call this “actionable intelligence”. Job Requirements • Design and build our Big Data Platform, and understand scale, performance and fault tolerance • Interact with Active IQ engineering teams across geographies to leverage expertise and contribute to the tech community • Identify the right tools to deliver product features by performing research, POCs and interacting with various open-source forums • Work on technologies related to NoSQL, SQL and in-memory databases • Conduct code reviews to ensure code quality, consistency and adherence to best practices Technical Skills • Hands-on Big Data development experience is required • Demonstrated up-to-date expertise in data engineering and complex data pipeline development • Design, develop, implement and tune distributed data processing pipelines that process large volumes of data, focusing on scalability, low latency, and fault tolerance in every system built • Awareness of Data Governance (data quality, metadata management, security, etc.) • Experience with one or more of Python/Java/Scala • Knowledge of and experience with Kafka, Storm, Druid, Cassandra or Presto is an added advantage Education • A minimum of 5 years of experience is required; 5-8 years of experience is preferred
• A Bachelor of Science Degree in Electrical Engineering or Computer Science, or a Master's Degree, or equivalent experience is required.

Posted 1 week ago

Apply

8.0 - 13.0 years

7 - 11 Lacs

Pune

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. Awarded Consultancy of the Year in the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With our presence across 32 cities globally, we support 100+ clients across banking, financial services and energy sectors. We are recognized for our deep transformation execution and delivery. WHY JOIN CAPCO You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry, on projects that will transform the financial services industry. MAKE AN IMPACT Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services. #BEYOURSELFATWORK Capco has a tolerant, open culture that values diversity, inclusivity, and creativity. CAREER ADVANCEMENT With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands. DIVERSITY & INCLUSION We believe that diversity of people and perspective gives us a competitive advantage. JOB SUMMARY: Position: Sr Consultant Location: Capco Locations (Bengaluru/Chennai/Hyderabad/Pune/Mumbai/Gurugram) Band: M3/M4 (8 to 14 years) Role Description: Job Title: Senior Consultant - Data Engineer Responsibilities: Design, build and optimise data pipelines and ETL processes in Azure Databricks, ensuring high performance, reliability, and scalability. Implement best practices for data ingestion, transformation, and cleansing to ensure data quality and integrity. Work within the client's best practice guidelines as set out by the Data Engineering Lead. Work with data modellers and testers to ensure pipelines are implemented correctly.
Collaborate as part of a cross-functional team to understand business requirements and translate them into technical solutions. Role Requirements: Strong Data Engineer with experience in Financial Services Knowledge of and experience building data pipelines in Azure Databricks Demonstrate a continual desire to implement strategic or optimal solutions and, where possible, avoid workarounds or short-term tactical solutions Work within an Agile team Experience/Skillset: 8+ years' experience in data engineering Good skills in SQL, Python and PySpark Good knowledge of Azure Databricks (understanding of delta tables, Apache Spark, Unity Catalog) Experience writing, optimizing, and analyzing SQL and PySpark code, with a robust capability to interpret complex data requirements and architect solutions Good knowledge of SDLC Familiar with Agile/Scrum ways of working Strong verbal and written communication skills Ability to manage multiple priorities and deliver to tight deadlines WHY JOIN CAPCO You will work on engaging projects with some of the largest banks in the world, on projects that will transform the financial services industry. We offer: A work culture focused on innovation and creating lasting value for our clients and employees Ongoing learning opportunities to help you acquire new skills or deepen existing expertise A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients A diverse, inclusive, meritocratic culture #LI-Hybrid

Posted 1 week ago

Apply

5.0 - 8.0 years

9 - 14 Lacs

Bengaluru

Work from Office

The Senior Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and building out new API integrations to support continuing increases in data volume and complexity. They will collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization. Responsibilities: Design, construct, install, test and maintain highly scalable data management systems and data pipelines. Ensure systems meet business requirements and industry practices. Build high-performance algorithms, prototypes, predictive models, and proofs of concept. Research opportunities for data acquisition and new uses for existing data. Develop data set processes for data modeling, mining and production. Integrate new data management technologies and software engineering tools into existing structures. Create custom software components and analytics applications. Install and update disaster recovery procedures. Collaborate with data architects, modelers, and IT team members on project goals. Provide senior-level technical consulting to peer data engineers during data application design and development for highly complex and critical data projects. Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience. Proven 5-8 years of experience as a Senior Data Engineer or similar role. Experience with big data tools: Hadoop, Spark, Kafka, Ansible, Chef, Terraform, Airflow, Protobuf RPC, etc. Expert-level SQL skills for data manipulation (DML) and validation (DB2). Experience with data pipeline and workflow management tools. Experience with object-oriented/object function scripting languages: Python, Java, Golang, etc. Strong problem-solving and analytical skills. Excellent verbal communication skills. Good interpersonal skills. Ability to provide technical leadership for the team.

Posted 1 week ago

Apply

7.0 - 12.0 years

7 - 11 Lacs

Pune

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. Awarded Consultancy of the Year in the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With our presence across 32 cities globally, we support 100+ clients across banking, financial services and energy sectors. We are recognized for our deep transformation execution and delivery. WHY JOIN CAPCO You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry, on projects that will transform the financial services industry. MAKE AN IMPACT Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services. #BEYOURSELFATWORK Capco has a tolerant, open culture that values diversity, inclusivity, and creativity. CAREER ADVANCEMENT With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands. DIVERSITY & INCLUSION We believe that diversity of people and perspective gives us a competitive advantage. JOB SUMMARY: Position: Sr Consultant Location: Pune / Bangalore Band: M3/M4 (7 to 14 years) Role Description: Must-Have Skills: 4+ years' experience (minimum) in PySpark and Scala + Spark. Proficient in debugging and data analysis.
4+ years of Spark experience. Understanding of the SDLC and the Big Data application life cycle. Experience with GitHub and Git commands. Good to have: experience with CI/CD tools such as Jenkins and Ansible. Fast problem solver and self-starter. Experience using Control-M and ServiceNow (for incident management). Positive attitude and good communication skills, both written and verbal, with clear spoken English. WHY JOIN CAPCO You will work on engaging projects with some of the largest banks in the world, on projects that will transform the financial services industry. We offer: A work culture focused on innovation and creating lasting value for our clients and employees Ongoing learning opportunities to help you acquire new skills or deepen existing expertise A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients A diverse, inclusive, meritocratic culture #LI-Hybrid

Posted 1 week ago

Apply

5.0 - 9.0 years

9 - 13 Lacs

Pune

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. Awarded Consultancy of the Year in the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With our presence across 32 cities globally, we support 100+ clients across banking, financial services and energy sectors. We are recognized for our deep transformation execution and delivery. WHY JOIN CAPCO You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry, on projects that will transform the financial services industry. MAKE AN IMPACT Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services. #BEYOURSELFATWORK Capco has a tolerant, open culture that values diversity, inclusivity, and creativity. CAREER ADVANCEMENT With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands. DIVERSITY & INCLUSION We believe that diversity of people and perspective gives us a competitive advantage. Big Data Tester Location: Pune (for Mastercard) Experience Level: 5-9 years Minimum Skill Set Required / Must Have: Python; PySpark; testing skills and best practices for data validation; SQL (hands-on experience, especially with complex queries) and ETL. Good to Have: Unix; Big Data (Hadoop, Spark, Kafka, NoSQL databases such as MongoDB and Cassandra, Hive, etc.); Data Warehouse (Traditional: Oracle, Teradata, SQL Server; Modern Cloud: Amazon Redshift, Google BigQuery, Snowflake); AWS development experience (not mandatory, but beneficial). Best Fit: Python + PySpark + Testing + SQL (hands-on) and ETL + good-to-have skills
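The tester role above combines Python with data-validation testing for ETL loads. As a small, hedged illustration of the kind of check such a role exercises, here is a pure-Python sketch that reconciles a source extract against a loaded target; the table shape, key column, and check names are illustrative assumptions, not a prescribed framework.

```python
# Sketch: basic ETL reconciliation checks between source and target rows.
def validate_load(source_rows, target_rows, key="id"):
    """Return a dict of check name -> pass/fail for a completed load."""
    src_keys = {r[key] for r in source_rows}
    tgt_keys = {r[key] for r in target_rows}
    return {
        "row_count_match": len(source_rows) == len(target_rows),
        "no_missing_keys": src_keys <= tgt_keys,     # every source key loaded
        "no_unexpected_keys": tgt_keys <= src_keys,  # nothing extra appeared
        "no_null_keys": all(r[key] is not None for r in target_rows),
    }

# Illustrative rows; a real test would pull these via SQL or PySpark.
source = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}]
target = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}]
print(validate_load(source, target))
```

In practice the same checks would run as assertions inside a test framework against query results rather than in-memory lists.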

Posted 1 week ago

Apply

5.0 - 8.0 years

9 - 14 Lacs

Pune

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. Awarded Consultancy of the Year in the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With our presence across 32 cities globally, we support 100+ clients across banking, financial services and energy sectors. We are recognized for our deep transformation execution and delivery. WHY JOIN CAPCO You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry, on projects that will transform the financial services industry. MAKE AN IMPACT Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services. #BEYOURSELFATWORK Capco has a tolerant, open culture that values diversity, inclusivity, and creativity. CAREER ADVANCEMENT With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands. DIVERSITY & INCLUSION We believe that diversity of people and perspective gives us a competitive advantage. Job Title: Big Data Engineer. The Senior Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and building out new API integrations to support continuing increases in data volume and complexity. They will collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization. Responsibilities: Design, construct, install, test and maintain highly scalable data management systems and data pipelines. Ensure systems meet business requirements and industry practices.
Build high-performance algorithms, prototypes, predictive models, and proofs of concept. Research opportunities for data acquisition and new uses for existing data. Develop data set processes for data modeling, mining and production. Integrate new data management technologies and software engineering tools into existing structures. Create custom software components and analytics applications. Install and update disaster recovery procedures. Collaborate with data architects, modelers, and IT team members on project goals. Provide senior-level technical consulting to peer data engineers during data application design and development for highly complex and critical data projects. Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience. Proven 5-8 years of experience as a Senior Data Engineer or similar role. Experience with big data tools: PySpark, Hadoop, Spark, Kafka, Ansible, Chef, Terraform, Airflow, Protobuf RPC, etc. Expert-level SQL skills for data manipulation (DML) and validation (DB2). Experience with data pipeline and workflow management tools. Experience with object-oriented/object function scripting languages: Python, Java, Golang, etc. Strong problem-solving and analytical skills. Excellent verbal communication skills. Good interpersonal skills. Ability to provide technical leadership for the team.

Posted 1 week ago

Apply

4.0 - 8.0 years

10 - 14 Lacs

Pune

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. Awarded Consultancy of the Year in the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With our presence across 32 cities globally, we support 100+ clients across banking, financial services and energy sectors. We are recognized for our deep transformation execution and delivery. WHY JOIN CAPCO You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry, on projects that will transform the financial services industry. MAKE AN IMPACT Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services. #BEYOURSELFATWORK Capco has a tolerant, open culture that values diversity, inclusivity, and creativity. CAREER ADVANCEMENT With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands. DIVERSITY & INCLUSION We believe that diversity of people and perspective gives us a competitive advantage. Job Title: Big Data Engineer - Scala Preferred Skills: Strong skills in messaging technologies like Apache Kafka or equivalent; programming skills in Scala and Spark with optimization techniques, and Python. Should be able to write queries through Jupyter Notebook. Orchestration tools like NiFi and Airflow. Design and implement intuitive, responsive UIs that allow issuers to better understand data and analytics. Experience with SQL and distributed systems. Strong understanding of cloud architecture. Ensure a high-quality code base by writing and reviewing performant, well-tested code. Demonstrated experience building complex products. Knowledge of Splunk or other alerting and monitoring solutions.
Fluent in the use of Git and Jenkins. A broad understanding of software engineering concepts and methodologies is required.

Posted 1 week ago

Apply

5.0 - 7.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Educational Requirements MCA, MSc, Bachelor of Engineering, BBA, BCom Service Line: Data & Analytics Unit Responsibilities: A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role would be to get to the heart of customer issues, diagnose problem areas, design innovative solutions and facilitate deployment, resulting in client delight. You will develop a proposal by owning parts of the proposal document and by giving inputs in solution design based on areas of expertise. You will plan the activities of configuration, configure the product as per the design, conduct conference room pilots and assist in resolving any queries related to requirements and solution design. You will conduct solution/product demonstrations and POC/Proof of Technology workshops, and prepare effort estimates that suit the customer's budgetary requirements and are in line with the organization's financial guidelines. Actively lead small projects and contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Additional Responsibilities: Ability to develop value-creating strategies and models that enable clients to innovate, drive growth and increase their business profitability. Good knowledge of software configuration management systems. Awareness of the latest technologies and industry trends. Logical thinking and problem-solving skills along with an ability to collaborate. Understanding of the financial processes for various types of projects and the various pricing models available. Ability to assess current processes, identify improvement areas and suggest technology solutions. Knowledge of one or two industry domains. Client interfacing skills. Project and team management. Technical and Professional Requirements: Python, PySpark, ETL, Data Pipeline, Big Data, AWS, GCP, Azure, Data Warehousing, Spark, Hadoop Preferred Skills: Technology - Big Data - ALL

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies