Jobs
Interviews

3301 Big Data Jobs - Page 12

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 4.0 years

3 - 7 Lacs

Bharatpur

Work from Office

Who are you?
2+ years of professional data engineering experience
Someone who spends as much time thinking about business insights as about engineering
Is a self-starter, and drives initiatives
Is excited to pick up AI, and integrate it at various touch points
You have strong experience in data analysis, growth marketing, or audience development (media or newsletters? Even better).
Have an awareness of Athena, Glue, Jupyter, or the intent to pick them up
You're comfortable working with tools like Google Analytics, SQL, email marketing platforms (Beehiiv is a plus), and data visualization tools.

Posted 1 week ago

Apply

2.0 - 4.0 years

3 - 7 Lacs

Nellore

Work from Office

Full Time Role at EssentiallySports for Data Growth Engineer

EssentiallySports is the home for the underserved fan, delivering storytelling that goes beyond the headlines. As a media platform, we combine deep audience insights with cultural trends to meet fandom where it lives and where it goes next.

Values
Focus on the user and all else will follow
Hire for intent, not for experience
Bootstrapping gives you the freedom to serve the customer and the team instead of investors
Internet and technology untap the niches
Action oriented, integrity, freedom, strong communicators, and responsibility
All things equal, the one with high agency wins

EssentiallySports is a top-10 sports media platform in the U.S., generating over a billion pageviews a year and 30M+ monthly active users. This massive traffic fuels our data-driven culture, allowing us to build owned audiences at scale through organic growth, a model we take pride in, with zero CAC. The next phase of ES growth is our newsletter initiative: in less than 9 months, we've built a robust newsletter brand with 700,000+ highly engaged readers and impressive performance metrics:
5 newsletter brands
700k+ subscribers
Open rates of 40%-46%

The role is for a data engineer with growth and business acumen, on the "permissionless growth" team. Someone who can connect the pipelines of millions of users, but at the same time knit a story of the how and why.

Responsibilities
Owning the data pipeline from web to Athena to email, end-to-end
You'll make the key decisions and see them through to successful user sign-up
Use data science to find real insights that translate into user engagement
Pushing changes every weekday
Personalization at scale: leverage fan behavior data to tailor content and improve lifetime value.
Who are you?
2+ years of professional data engineering experience
Someone who spends as much time thinking about business insights as about engineering
Is a self-starter, and drives initiatives
Is excited to pick up AI, and integrate it at various touch points
You have strong experience in data analysis, growth marketing, or audience development (media or newsletters? Even better).
Have an awareness of Athena, Glue, Jupyter, or the intent to pick them up
You're comfortable working with tools like Google Analytics, SQL, email marketing platforms (Beehiiv is a plus), and data visualization tools.
Collaborative, and want to see the team succeed in its goals
Problem-solving, proactive, solution-oriented mindset, to spot opportunities and translate them into real growth
Ability to thrive in a fast-paced startup environment and take ownership of working through ambiguity
Excited to join a lean team in a big company that moves quickly

Posted 1 week ago

Apply

13.0 - 20.0 years

35 - 70 Lacs

Bengaluru, Mumbai (All Areas)

Work from Office

Required Skills and Experience
13+ years overall is a must, with 7+ years of relevant experience working on Big Data Platform technologies.
Proven technical skills across Cloudera, Teradata, Databricks, MS Data Fabric, Apache Hadoop, BigQuery, and AWS Big Data solutions (EMR, Redshift, Kinesis, Qlik).
Good domain experience in the BFSI or Manufacturing area.
Excellent communication skills to engage with clients and influence decisions.
High level of competence in preparing architectural documentation and presentations.
Must be organized, self-sufficient, and able to manage multiple initiatives simultaneously.
Must have the ability to coordinate with other teams independently.
Work with both internal and external stakeholders to identify business requirements and develop solutions that meet those requirements / build the opportunity.
Note: If you have experience in the BFSI domain, then the location will be Mumbai only. If you have experience in the Manufacturing domain, the location will be Mumbai & Bangalore only.
Interested candidates can share their updated resumes at shradha.madali@sdnaglobal.com

Posted 1 week ago

Apply

1.0 - 4.0 years

15 - 22 Lacs

Hyderabad, Secunderabad

Work from Office

Job Summary
We are looking for a fresher MD/PhD with a specialization in Microbiology to join our team as a Clinical Outreach / Scientific Outreach professional. This position requires active field engagement in collaboration with the sales team, including visits to hospitals and clinical institutions to interact with physicians and other healthcare professionals. The candidate will be responsible for effectively communicating the scientific, microbiological, and clinical aspects of our products, ensuring a clear and thorough understanding of their clinical relevance, applications, and value. The candidate will also participate in Continuing Medical Education (CME) programs and Round Table Meetings (RTMs).

What we want you to do
Work closely with the sales team during client visits, primarily engaging with doctors and healthcare providers.
Explain the microbiological and clinical aspects of our products in a clear and professional manner.
Bridge the gap between scientific knowledge and clinical application to support the adoption of our products.
Provide technical support and medical guidance during client meetings and product demonstrations.
Help doctors understand how the product integrates into patient care, infection control, and diagnostic workflows.
Share relevant case studies, clinical experiences, or infection trends to highlight product effectiveness.
Maintain a strong understanding of emerging microbiological trends and technologies, including Next-Generation Sequencing (NGS).
Collaborate with internal teams such as R&D, sales, and Operations to ensure accurate communication and feedback.
Actively participate in Continuing Medical Education (CME) programs and Round Table Meetings (RTMs).

What we are looking for in you
Fresher MD/PhD with a specialization in Microbiology.
Proven track record of effective communication and collaboration with interdisciplinary healthcare teams.
Demonstrated understanding of infection control protocols and antimicrobial stewardship principles.
Familiarity with molecular and sequencing (NGS) technologies and their applications in clinical microbiology is advantageous.
Strong knowledge of clinical microbiology, infectious diseases, and diagnostic methods.
Excellent verbal communication and presentation skills.
Ability to explain complex technical and medical concepts in simple, clinician-friendly language.
Comfortable with on-field client interactions.
Must be willing to travel PAN India for CME programs and RTMs.

What you will gain
A dynamic and collaborative work environment dedicated to making a meaningful impact in healthcare.
Experience working with advanced sequencing technologies in the diagnostic industry, i.e. NGS, WGS, Nanopore, and Illumina.
Opportunities for professional development and continued education.
Competitive salary commensurate with experience.
Comprehensive health benefits package.

Posted 1 week ago

Apply

8.0 - 13.0 years

18 - 22 Lacs

Mumbai, Chennai, Bengaluru

Work from Office

At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow. Informed and validated by science and data. Superpowered by creativity and design. All underpinned by technology created with purpose.

Your role
In this role you will play a key part in Data Strategy. We are looking for candidates with 8+ years of experience in Data Strategy (Tech Architects, Senior BAs) who will support our product, sales, and leadership teams by creating data-strategy roadmaps. The ideal candidate is adept at understanding as-is enterprise data models to help Data Scientists / Data Analysts provide actionable insights to the leadership. They must have strong experience in understanding data, using a variety of data tools. They must have a proven ability to understand the current data pipeline and ensure that a minimal-cost solution architecture is created, and must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and working with stakeholders to improve business outcomes.
Identify, design, and recommend internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc., and identify data tools for analytics and data-scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Work with data and analytics experts to create frameworks for digital twins / digital threads, drawing on relevant experience in data exploration & profiling; take part in data literacy activities for all stakeholders and coordinate with cross-functional teams, acting as the SPOC for global master data.

Your Profile
8+ years of experience in a Data Strategy role, with a graduate degree in Computer Science, Informatics, Information Systems, or another quantitative field, and experience with the following software/tools:
Experience with big data tools: Hadoop, Spark, Kafka, etc.
Experience with relational SQL and NoSQL databases, including Postgres and Cassandra/MongoDB.
Experience with data pipeline and workflow management tools: Luigi, Airflow, etc.
5+ years of advanced working SQL knowledge and experience with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases: Postgres / SQL / Mongo.
2+ years of working knowledge in Data Strategy: Data Governance / MDM, etc.
5+ years of experience in creating data strategy frameworks/roadmaps, in analytics and data maturity evaluation based on current as-is vs to-be frameworks, and in creating functional requirements documents and enterprise to-be data architecture.
Relevant experience in identifying and prioritizing use cases for the business; important KPI identification, opex/capex for CXOs.
4+ years of experience in Data Analytics operating models with vision on prescriptive, descriptive, predictive, and cognitive analytics.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

Location - Bengaluru, Mumbai, Chennai, Pune, Hyderabad, Noida

Posted 1 week ago

Apply

6.0 - 10.0 years

18 - 25 Lacs

Noida

Work from Office

Job Title: Senior Data Warehouse Developer
Location: Noida, India

Position Overview: Working with the Finance Systems Manager, the role will ensure that the ERP system is available and fit for purpose. The ERP Systems Developer will develop the ERP system, provide comprehensive day-to-day support and training, and evolve the current ERP system for the future.

Key Responsibilities:
As a Sr. DW BI Developer, the candidate will participate in the design, development, customization, and maintenance of software applications.
Analyze the different applications/products, and design and implement the DW using best practices.
Rich data governance experience: data security, data quality, provenance/lineage.
Maintain a close working relationship with the other application stakeholders.
Experience developing secured and high-performance web applications.
Knowledge of software development life-cycle methodologies, e.g. Iterative, Waterfall, Agile, etc.
Designing and architecting future releases of the platform.
Participating in troubleshooting application issues.
Jointly working with other teams and partners handling different aspects of the platform creation.
Tracking advancements in software development technologies and applying them judiciously in the solution roadmap.
Ensuring all quality controls and processes are adhered to.
Planning the major and minor releases of the solution.
Ensuring robust configuration management.
Working closely with the Engineering Manager on different aspects of product lifecycle management.
Demonstrate the ability to work independently in a fast-paced environment requiring multitasking and efficient time management.

Required Skills and Qualifications:
End-to-end lifecycle of data warehousing, data lakes, and reporting.
Experience with maintaining/managing data warehouses.
Responsible for the design and development of large, scaled-out, real-time, high-performing Data Lake / Data Warehouse systems (including big data and cloud).
Strong SQL and analytical skills.
Experience in Power BI, Tableau, QlikView, Qlik Sense, etc.
Experience in Microsoft Azure services.
Experience in developing and supporting ADF pipelines.
Experience in Azure SQL Server / Databricks / Azure Analysis Services.
Experience in developing tabular models.
Experience in working with APIs.
Minimum 2 years of experience in a similar role.
Experience with data warehousing and data modelling.
Strong experience in SQL.
2-6 years of total experience in building DW/BI systems.
Experience with ETL and working with large-scale datasets.
Proficiency in writing and debugging complex SQL.
Prior experience working with global clients.
Hands-on experience with Kafka, Flink, Spark, Snowflake, Airflow, NiFi, Oozie, Pig, Hive, Impala, Sqoop.
Storage such as HDFS, object storage (S3 etc.), RDBMS, MPP, and NoSQL DBs.
Experience with distributed data management and data failover, including databases (relational, NoSQL, big data), data analysis, data processing, data transformation, high availability, and scalability.
Experience in end-to-end project implementation in the cloud (Azure / AWS / GCP) as a DW BI Developer.
Rich data governance experience: data security, data quality, provenance/lineage. Understanding of industry trends and products in DataOps, continuous intelligence, augmented analytics, and AI/ML.

Nice to have Skills and Qualifications:
Prior experience of working in a start-up culture.
Prior experience of working in Agile SAFe and PI Planning.
Prior experience of working in Ed-Tech / E-Learning companies.
Any relevant DW/BI certification.
Working knowledge of processing huge amounts of data, performance tuning, cluster administration, high availability and failover, backup and restore.

Experience: 6-10 years
Educational Qualification(s): Bachelor's/Master's degree in Computer Science, Engineering, or equivalent

Posted 1 week ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Mumbai

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Data Engineering
Good to have skills: NA
Minimum 2 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly within the business environment. You will also engage in testing and troubleshooting to enhance application performance and user experience, while continuously seeking ways to improve processes and deliver high-quality solutions.

Roles & Responsibilities:
Design, build, and maintain scalable ETL/ELT pipelines for structured and unstructured data.
Monitor and analyze key performance metrics (e.g., CTR, CPC, ROAS) to support business objectives.
Implement real-time data workflows with anomaly detection and performance reporting.
Develop and maintain data infrastructure using tools such as Spark, Hadoop, Kafka, and Airflow.
Collaborate with DevOps teams to deploy data solutions in containerized environments (Docker, Kubernetes).
Partner with data scientists to prepare, cleanse, and transform data for modeling.
Support the development of predictive models using tools like BigQuery ML and scikit-learn.
Work closely with stakeholders across product, design, and executive teams to understand data needs.
Ensure compliance with data governance, privacy, and security standards.

Professional & Technical Skills:
1-2 years of experience in data engineering or a similar role.
Familiarity with cloud platforms (AWS, GCP, or Azure) and big data tools (Hive, HBase, Spark).
Familiarity with DevOps practices and CI/CD pipelines.

Additional Information:
This position is based at our Mumbai office.
Master's degree in Computer Science, Engineering, or a related field.
Qualification: 15 years full time education

Posted 1 week ago

Apply

15.0 - 20.0 years

9 - 14 Lacs

Hyderabad

Work from Office

Project Role: AI / ML Engineer
Project Role Description: Develops applications and systems that utilize AI to improve performance and efficiency, including but not limited to deep learning, neural networks, chatbots, and natural language processing.
Must have skills: Google Cloud Machine Learning Services
Good to have skills: Google Pub/Sub, GCP Dataflow, Google Dataproc
Minimum 2 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an AI / ML Engineer, you will engage in the development of applications and systems that leverage artificial intelligence to enhance performance and efficiency. Your typical day will involve collaborating with cross-functional teams to design and implement innovative solutions, utilizing advanced technologies such as deep learning and natural language processing. You will also be responsible for analyzing data and refining algorithms to ensure optimal functionality and user experience, while continuously exploring new methodologies to drive improvements in AI applications.

Roles & Responsibilities:
Expected to perform independently and become an SME.
Required active participation/contribution in team discussions.
Contribute to providing solutions to work-related problems.
Assist in the design and development of AI-driven applications to meet project requirements.
Collaborate with team members to troubleshoot and resolve technical challenges.

Professional & Technical Skills:
Must Have Skills: Proficiency in Google Cloud Machine Learning Services.
Good To Have Skills: Experience with GCP Dataflow, Google Pub/Sub, Google Dataproc.
Strong understanding of machine learning frameworks and libraries.
Experience in deploying machine learning models in cloud environments.
Familiarity with data preprocessing and feature engineering techniques.

Additional Information:
The candidate should have a minimum of 2 years of experience in Google Cloud Machine Learning Services.
This position is based at our Hyderabad office.
A 15 years full time education is required.
Qualification: 15 years full time education

Posted 1 week ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Hyderabad, Chennai

Work from Office

Interested candidates can also apply with Sanjeevan Natarajan - 94866 21923, sanjeevan.natarajan@careernet.in

Role & responsibilities
Technical Leadership: Lead a team of data engineers and developers; define technical strategy, best practices, and architecture for data platforms.
End-to-End Solution Ownership: Architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks.
Data Pipeline Strategy: Oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets.
Data Governance & Quality: Enforce data validation, lineage, and quality checks across the data lifecycle. Define standards for metadata, cataloging, and governance.
Orchestration & Automation: Design automated workflows using Airflow, Databricks Jobs/APIs, and other orchestration tools for end-to-end data operations.
Cloud Cost & Performance Optimization: Implement performance tuning strategies, cost optimization best practices, and efficient cluster configurations on AWS/Databricks.
Security & Compliance: Define and enforce data security standards, IAM policies, and compliance with industry-specific regulatory frameworks.
Collaboration & Stakeholder Engagement: Work closely with business users, analysts, and data scientists to translate requirements into scalable technical solutions.
Migration Leadership: Drive strategic data migrations from on-prem/legacy systems to cloud-native platforms with minimal risk and downtime.
Mentorship & Growth: Mentor junior engineers, contribute to talent development, and ensure continuous learning within the team.

Preferred candidate profile
Python, SQL, PySpark, Databricks, AWS (mandatory)
Leadership experience in Data Engineering/Architecture
Added advantage: experience in Life Sciences / Pharma

Posted 1 week ago

Apply

3.0 - 7.0 years

15 - 19 Lacs

Bengaluru

Work from Office

About Netskope
Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

Sr. Staff Engineer, Data Platform

About the role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Platform team is uniquely positioned at the intersection of security, big data and cloud computing. We are responsible for providing ultra low-latency access to global security insights and intelligence data to our customers, and enabling them to act in near real-time. We're looking for a seasoned engineer to help us build next-generation data pipelines that provide near real-time ingestion of security insights and intelligence data, using open source messaging and stream processing engines (Kafka, Flink, Spark) to connect sources to sinks (BigQuery, Mongo, etc.), and RESTful APIs that allow programmatic access to the data.

What's in it for you
You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics.
Your contributions will have a major impact on our global customer base and across the industry through our market-leading products.
You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills.

What you will be doing
Building next-generation data pipelines to provide near real-time ingestion of security insights and intelligence data.
Architecting highly scalable data pipelines and solving real-time stream processing challenges at scale.
Partnering with industry experts in security and big data, and with product and engineering teams, to conceptualize, design and build innovative solutions for hard problems on behalf of our customers.
Evaluating open source technologies to find the best fit for our needs, and also contributing to some of them!
Helping other teams architect their systems on top of the data platform and influencing their system design.
This is a great opportunity to work with smart people in a fun and collaborative environment.

Required skills and experience
10+ years of industry experience building highly scalable distributed data systems
Programming experience in Python, Java or Golang
Excellent data structure and algorithm skills
Proven good development practices like automated testing and measuring code coverage
Experience building real-time stream processing systems using engines like Flink, Spark or similar
Experience with distributed datastores like Druid, Mongo, Cassandra, BigQuery or similar
Excellent written and verbal communication skills
Bonus points for contributions to the open source community

Education
BSCS or equivalent required, MSCS or equivalent strongly preferred

Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran status, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate. Netskope respects your privacy and is committed to protecting the personal information you share with us; please refer to Netskope's Privacy Policy for more details.

Posted 1 week ago

Apply

12.0 - 17.0 years

50 - 55 Lacs

Bengaluru

Work from Office

The group you'll be a part of
In the Global Products Group, we are dedicated to excellence in the design and engineering of Lam's etch and deposition products. We drive innovation to ensure our cutting-edge solutions are helping to solve the biggest challenges in the semiconductor industry.

The impact you'll make
Join Lam as a Data Scientist, where you'll design, develop, and program methods to analyze unstructured and diverse big data into actionable insights. You'll develop algorithms and automated processes to evaluate large data sets from disparate sources. Your expertise in generating, interpreting, and communicating actionable insights enables Lam to make informed and data-driven decisions.

Who we're looking for
Typically requires a minimum of 12 years of related experience with a Bachelor's degree; or 8 years and a Master's degree; or a PhD with 5 years of experience; or equivalent experience.

Our commitment
We believe it is important for every person to feel valued, included, and empowered to achieve their full potential. By bringing unique individuals and viewpoints together, we achieve extraordinary results. Lam Research ("Lam" or the "Company") is an equal opportunity employer. Lam is committed to and reaffirms support of equal opportunity in employment and non-discrimination in employment policies, practices and procedures on the basis of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, sex (including pregnancy, childbirth and related medical conditions), gender, gender identity, gender expression, age, sexual orientation, or military and veteran status or any other category protected by applicable federal, state, or local laws. It is the Company's intention to comply with all applicable laws and regulations. Company policy prohibits unlawful discrimination against applicants or employees.

Lam offers a variety of work location models based on the needs of each role. Our hybrid roles combine the benefits of on-site collaboration with colleagues and the flexibility to work remotely, and fall into two categories: On-site Flex and Virtual Flex. On-site Flex: you'll work 3+ days per week on-site at a Lam or customer/supplier location, with the opportunity to work remotely for the balance of the week. Virtual Flex: you'll work 1-2 days per week on-site at a Lam or customer/supplier location, and remotely the rest of the time.

Posted 1 week ago

Apply

7.0 - 12.0 years

13 - 14 Lacs

Pune

Work from Office

We are looking to add an experienced and enthusiastic Lead Data Scientist to our Jet2 Data Science team in India. Reporting to the Data Science Delivery Manager, the Lead Data Scientist is a key appointment to the Data Science Team, with responsibility for executing the data science strategy and realising the benefits we can bring to the business by combining insights gained from multiple large data sources with the contextual understanding and experience of our colleagues across the business.

In this exciting role, you will be joining an established team of 40+ Data Science professionals, based across our UK and India bases, who are using data science to understand, automate and optimise key manual business processes, inform our marketing strategy, assess product development and revenue opportunities, and optimise operational costs. As Lead Data Scientist, you will have strong experience in leading data science projects and creating machine learning models, and be able to confidently communicate with and enthuse key business stakeholders.

A typical day in your role at Jet2TT:
As Lead Data Scientist you will lead a team of data scientists and be responsible for delivering and managing day-to-day activities. The successful candidate will be highly numerate with a statistical background, experienced in using R, Python or a similar statistical analysis package. You will be expected to work with internal teams across the business, to identify and collaborate with stakeholders across the wider group. Leading and coaching a group of Data Scientists, you will plan and execute the use of machine learning and statistical modelling tools suited to the delivery or discovery problem identified for each initiative. You will have a strong ability to analyse the created algorithms and models to understand how changes in metrics in one area of the business could impact other areas, and be able to communicate those analyses to key business stakeholders.

You will identify efficiencies in the use of data across its lifecycle, reducing data redundancy, structuring data to ensure efficient use of time, and ensuring retained data/information provides value to the organisation and remains in line with legitimate business and/or regulatory requirements. Your ability to rise above groupthink and see beyond the here and now is matched only by your intellectual curiosity. Strong SQL skills and the ability to create clear data visualisations in tools such as Tableau or Power BI will be essential. You will also have experience in developing and deploying predictive models using machine learning frameworks and have worked with big data technologies. As we aim to realise the benefits of cloud technologies, some familiarity with cloud platforms like AWS for data science and storage would be desirable. You will be skilled in gathering data from multiple sources and in multiple formats, with knowledge of data warehouse design, logical and physical database design, and the challenges posed by data quality.

Qualifications, Skills and Experience (Candidate Requirements):
Experience in leading a small to mid-size data science team
Minimum 7 years of experience in the industry, with 4+ years in data science
Experience in building and deploying machine learning algorithms, and detailed knowledge of applied statistics
Good understanding of various data architectures: RDBMS, data warehouse & big data
Experience of working with regions such as the US, UK, Europe or Australia is a plus
Liaise with Data Engineers, Technology Leaders & Business Stakeholders
Working knowledge of the Agile framework is good to have
Demonstrates willingness to learn
Mentoring and coaching team members
Strong delivery performance, working on complex solutions in a fast-paced environment

Posted 1 week ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role : Data Engineer
Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills : Snowflake Data Warehouse
Good to have skills : NA
Minimum 3 year(s) of experience is required
Educational Qualification : 15 years full time education
Summary : As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Assist in the optimization of data pipelines for improved performance and efficiency.
- Collaborate with stakeholders to gather requirements and translate them into technical specifications.
Professional & Technical Skills:
- Must Have Skills: Proficiency in Snowflake Data Warehouse.
- Strong understanding of ETL processes and data integration techniques.
- Experience with data modeling and database design principles.
- Familiarity with data quality frameworks and best practices.
- Knowledge of cloud data warehousing solutions and architecture.
Additional Information:
- The candidate should have minimum 3 years of experience in Snowflake Data Warehouse.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
Qualification : 15 years full time education

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Project Role : Application Lead
Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills : AWS Redshift
Good to have skills : PySpark
Minimum 5 year(s) of experience is required
Educational Qualification : 15 years full time education
Summary : As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will oversee the development process and ensure successful project delivery.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Lead the application development process.
- Coordinate with stakeholders to gather requirements.
- Ensure timely delivery of projects.
Professional & Technical Skills:
- Must Have Skills: Proficiency in AWS Glue.
- Good To Have Skills: Experience with PySpark.
- Strong understanding of ETL processes.
- Experience in data transformation and integration.
- Knowledge of cloud computing platforms.
- Ability to troubleshoot and resolve technical issues.
Additional Information:
- The candidate should have a minimum of 5 years of experience in AWS Glue.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.
Qualification : 15 years full time education

Posted 1 week ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Chennai

Work from Office

Project Role : Data Engineer
Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills : Agile Project Management
Good to have skills : Apache Spark
Minimum 7.5 year(s) of experience is required
Educational Qualification : 15 years full time education
Summary : As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in ensuring that data is accessible, reliable, and ready for analysis, contributing to informed decision-making across the organization.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering practices.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.
Professional & Technical Skills:
- Must Have Skills: Proficiency in Agile Project Management.
- Good To Have Skills: Experience with Apache Spark, Google Cloud SQL, Python (Programming Language).
- Strong understanding of data pipeline architecture and design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with data quality frameworks and best practices.
Additional Information:
- The candidate should have minimum 7.5 years of experience in Agile Project Management.
- This position is based in Chennai (Mandatory).
- A 15 years full time education is required.
Qualification : 15 years full time education

Posted 1 week ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role : Data Engineer
Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills : Databricks Unified Data Analytics Platform
Good to have skills : NA
Minimum 5 year(s) of experience is required
Educational Qualification : 15 years full time education
Summary : As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.
Professional & Technical Skills:
- Must Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data pipeline architecture and design.
- Experience with ETL processes and data integration techniques.
- Familiarity with data quality frameworks and best practices.
- Knowledge of cloud platforms and services related to data analytics.
Additional Information:
- The candidate should have minimum 7.5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Indore office.
- A 15 years full time education is required.
Qualification : 15 years full time education

Posted 1 week ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role : Data Engineer
Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills : PySpark
Good to have skills : NA
Minimum 3 year(s) of experience is required
Educational Qualification : 15 years full time education
Summary : As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy of the organization, ensuring that data solutions are efficient, scalable, and aligned with business objectives.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Analyze and troubleshoot data-related issues to ensure optimal performance of data solutions.
- Collaborate with stakeholders to gather requirements and translate them into technical specifications.
Professional & Technical Skills:
- Must Have Skills: Proficiency in PySpark.
- Strong understanding of data modeling concepts and database design.
- Experience with ETL tools and data integration techniques.
- Familiarity with cloud platforms such as AWS or Azure for data storage and processing.
- Knowledge of data governance and data quality best practices.
Additional Information:
- The candidate should have minimum 3 years of experience in PySpark.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
Qualification : 15 years full time education

Posted 1 week ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Pune

Work from Office

Project Role : Data Engineer
Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills : PySpark
Good to have skills : NA
Minimum 3 year(s) of experience is required
Educational Qualification : 15 years full time education
Summary : As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with team members to enhance data workflows and contribute to the overall efficiency of data management practices.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Assist in the optimization of data processing workflows to enhance performance.
- Collaborate with cross-functional teams to gather requirements and deliver data solutions.
Professional & Technical Skills:
- Must Have Skills: Proficiency in PySpark.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with cloud platforms and services for data storage and processing.
- Knowledge of data governance and data quality best practices.
Additional Information:
- The candidate should have minimum 3 years of experience in PySpark.
- This position is based at our Pune office.
- A 15 years full time education is required.
Qualification : 15 years full time education

Posted 1 week ago

Apply

15.0 - 20.0 years

3 - 6 Lacs

Bengaluru

Work from Office

Project Role : Data Science Practitioner
Project Role Description : Formulate, design and deliver AI/ML-based decision-making frameworks and models for business outcomes. Measure and justify the value of AI/ML-based solutions.
Must have skills : Machine Learning
Good to have skills : NA
Minimum 15 year(s) of experience is required
Educational Qualification : 15 years full time education
Summary : As a Data Science Practitioner, you will be engaged in formulating, designing, and delivering AI and machine learning-based decision-making frameworks and models that drive business outcomes. Your typical day will involve collaborating with various teams to measure and justify the value of AI and machine learning solutions, ensuring that they align with organizational goals and deliver tangible results. You will also be responsible for analyzing complex data sets, deriving insights, and presenting findings to stakeholders to support informed decision-making processes.
Roles & Responsibilities:
- Expected to be a Subject Matter Expert with deep knowledge and experience.
- Should have influencing and advisory skills.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Facilitate workshops and training sessions to enhance team capabilities in AI and machine learning.
- Continuously evaluate and improve existing AI and machine learning models to ensure optimal performance.
Professional & Technical Skills:
- Must Have Skills: Proficiency in Machine Learning.
- Strong understanding of data preprocessing techniques and feature engineering.
- Experience with various machine learning frameworks such as TensorFlow and PyTorch.
- Ability to implement and optimize algorithms for predictive modeling.
- Familiarity with cloud platforms for deploying machine learning models.
Additional Information:
- The candidate should have minimum 15 years of experience in Machine Learning.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
Qualification : 15 years full time education

Posted 1 week ago

Apply

5.0 - 8.0 years

8 - 12 Lacs

Bengaluru

Work from Office

- Strong experience working with the Apache Spark framework, including a solid grasp of core concepts, performance optimizations, and industry best practices
- Proficient in PySpark with hands-on coding experience; familiarity with unit testing, object-oriented programming (OOP) principles, and software design patterns
- Experience with code deployment and associated processes
- Proven ability to write complex SQL queries to extract business-critical insights
- Hands-on experience in streaming data processing
- Familiarity with machine learning concepts is an added advantage
- Experience with NoSQL databases
- Good understanding of Test-Driven Development (TDD) methodologies
- Demonstrated flexibility and eagerness to learn new technologies
Roles and Responsibilities
- Design and implement solutions for problems arising out of large-scale data processing
- Attend/drive various architectural, design and status calls with multiple stakeholders
- Ensure end-to-end ownership of all assigned tasks, including development, testing, deployment and support
- Design, build and maintain efficient, reusable and reliable code
- Test implementations, troubleshoot and correct problems
- Capable of working as an individual contributor and within a team
- Ensure high-quality software development with complete documentation and traceability
- Fulfil organizational responsibilities (sharing knowledge and experience with other teams/groups)
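As a hypothetical illustration of the "complex SQL queries to extract business-critical insights" requirement, the snippet below finds the top-earning product per region by joining per-group totals against each region's maximum. SQLite stands in for the warehouse here for portability, and the table and column names are invented:

```python
import sqlite3

# Invented sample data; SQLite stands in for the warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, product TEXT, revenue REAL);
INSERT INTO sales VALUES
  ('north', 'widget', 120.0), ('north', 'gadget', 300.0),
  ('south', 'widget', 450.0), ('south', 'gadget', 90.0);
""")

# Top-earning product per region: aggregate once in a CTE, then join the
# per-(region, product) totals against each region's maximum total.
query = """
WITH totals AS (
  SELECT region, product, SUM(revenue) AS total
  FROM sales GROUP BY region, product
)
SELECT t.region, t.product, t.total
FROM totals t
JOIN (SELECT region, MAX(total) AS best FROM totals GROUP BY region) m
  ON t.region = m.region AND t.total = m.best
ORDER BY t.region;
"""
rows = conn.execute(query).fetchall()
print(rows)  # [('north', 'gadget', 300.0), ('south', 'widget', 450.0)]
```

The same top-N-per-group shape appears constantly in Spark SQL as well, usually written with a window function over a partition.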

Posted 1 week ago

Apply

3.0 - 8.0 years

2 - 5 Lacs

Bengaluru

Work from Office

Project Role : Quality Engineer (Tester)
Project Role Description : Enables full stack solutions through multi-disciplinary team planning and ecosystem integration to accelerate delivery and drive quality across the application lifecycle. Performs continuous testing for security, API, and regression suite. Creates automation strategy, automated scripts and supports data and environment configuration. Participates in code reviews, monitors, and reports defects to support continuous improvement activities for the end-to-end testing process.
Must have skills : Data Warehouse ETL Testing
Good to have skills : Oracle Procedural Language Extensions to SQL (PLSQL)
Minimum 3 year(s) of experience is required
Educational Qualification : 15 years full time education
Summary : As a Quality Engineer, you will enable full stack solutions through multi-disciplinary team planning and ecosystem integration to accelerate delivery and drive quality across the application lifecycle. Your typical day will involve performing continuous testing for security, API, and regression suites, creating automation strategies, and supporting data and environment configuration. You will also participate in code reviews, monitor, and report defects, contributing to continuous improvement activities for the end-to-end testing process. Your role will be pivotal in ensuring that quality standards are met throughout the development lifecycle, collaborating closely with various teams to achieve optimal results.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Assist in the development and execution of test plans and test cases to ensure comprehensive coverage.
- Collaborate with cross-functional teams to identify and resolve quality issues in a timely manner.
Professional & Technical Skills:
- Must Have Skills: Proficiency in Data Warehouse ETL Testing.
- Good To Have Skills: Experience with Oracle Procedural Language Extensions to SQL (PLSQL).
- Strong understanding of data integration processes and methodologies.
- Experience with automated testing tools and frameworks.
- Familiarity with performance testing and monitoring tools.
Additional Information:
- The candidate should have minimum 3 years of experience in Data Warehouse ETL Testing.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
Qualification : 15 years full time education
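As a minimal, hypothetical sketch of the reconciliation checks that Data Warehouse ETL testing typically automates, the snippet below compares row counts and summed amounts between a source and a target table. The table names, data, and tolerance are invented, and SQLite stands in for the actual source and target systems:

```python
import sqlite3

# Invented source/target tables; in a real pipeline these would live in
# two different systems reached through their own connections.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE src_orders (id INTEGER, amount REAL);
CREATE TABLE tgt_orders (id INTEGER, amount REAL);
INSERT INTO src_orders VALUES (1, 10.0), (2, 25.5), (3, 7.25);
INSERT INTO tgt_orders VALUES (1, 10.0), (2, 25.5), (3, 7.25);
""")

def reconcile(conn, src, tgt):
    """Pass if row counts match and summed amounts agree within tolerance."""
    src_count, src_sum = conn.execute(
        f"SELECT COUNT(*), SUM(amount) FROM {src}").fetchone()
    tgt_count, tgt_sum = conn.execute(
        f"SELECT COUNT(*), SUM(amount) FROM {tgt}").fetchone()
    return src_count == tgt_count and abs(src_sum - tgt_sum) < 1e-9

print(reconcile(conn, "src_orders", "tgt_orders"))  # True
```

A regression suite would run checks like this per table after every ETL load, alongside column-level checksums and null/duplicate-key checks.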

Posted 1 week ago

Apply

3.0 - 8.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Project Role : Application Lead
Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills : PySpark
Good to have skills : NA
Minimum 3 year(s) of experience is required
Educational Qualification : 15 years full time education
Summary : As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various stakeholders to gather requirements, overseeing the development process, and ensuring that the applications meet the specified needs. You will also engage in problem-solving discussions with your team, providing guidance and support to ensure successful project outcomes. Your role will require you to stay updated with the latest technologies and methodologies to enhance application performance and user experience.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Facilitate communication between technical teams and stakeholders to ensure alignment on project goals.
- Mentor junior team members, providing them with guidance and support in their professional development.
Professional & Technical Skills:
- Must Have Skills: Proficiency in PySpark.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with data integration and ETL processes.
- Familiarity with cloud platforms and services related to application development.
- Ability to troubleshoot and optimize application performance.
Additional Information:
- The candidate should have minimum 3 years of experience in PySpark.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.
Qualification : 15 years full time education

Posted 1 week ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Project Role : Data Engineer
Project Role Description : Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills : Databricks Unified Data Analytics Platform
Good to have skills : NA
Minimum 3 year(s) of experience is required
Educational Qualification : 15 years full time education
Summary : As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy of the organization, ensuring that data is accessible, reliable, and actionable for stakeholders.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Assist in the design and implementation of data architecture and data models.
- Monitor and optimize data pipelines for performance and reliability.
Professional & Technical Skills:
- Must Have Skills: Proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration techniques and ETL processes.
- Experience with data quality frameworks and data governance practices.
- Familiarity with cloud platforms and services related to data storage and processing.
- Knowledge of programming languages such as Python or Scala for data manipulation.
Additional Information:
- The candidate should have minimum 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.
Qualification : 15 years full time education

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Kolhapur

Work from Office

Job Overview
We are looking for talented Machine Learning Engineers to join us and be part of this journey. You will work closely with other Engineers, Product Managers, and underwriters to develop, improve, and deploy machine learning models and to solve other optimization problems. We make extensive use of machine learning in our credit product, where it is used (among other things) for underwriting and loan servicing decisions. We are also actively exploring other applications of Machine Learning in some of our newer products, with the ultimate goal of improving the user experience.
Machine Learning sits at the intersection of a number of different disciplines: Computer Science, Statistics, Operations Research, Data Science, and others. At Branch, we fundamentally believe that in order for Machine Learning to be impactful, it needs to be closely embedded into the rest of the product development and software engineering process, which is why we emphasize the importance of software engineering skills and experience for this role.
As a company, we are passionate about our customers, fearless in the face of barriers, and driven by data. As an engineering team, we value bottom-up innovation and decentralized decision-making. We believe the best ideas can come from anyone in the company, and we are working hard to create an environment where everyone feels empowered to propose solutions to the challenges we face.
We are looking for individuals who thrive in a fast-moving, innovative, and customer-focused setting.
Responsibilities
- Credit Decisions: Core to our business is understanding and building signals from unstructured and structured data to identify good borrowers.
- Customer Service: Using machine learning and LLM/NLP, automate customer service interactions and provide context to our customer service team.
- Fraud Prevention: Identify patterns of fraudulent behavior and build models to detect and prevent these behaviors.
- Teamwork: Bring your experience to bear on the technical direction and abilities of the team, and work cross-functionally with policy and product teams as we improve processes and break new ground.
Qualifications
- 2+ years of hands-on experience building software in a production environment. Startup or early-stage team experience is preferred.
- Excellent software engineering and programming skills, especially Python and SQL.
- A diverse range of data skills, including experimentation, statistics, and machine learning, and experience using these skills to inform business decisions.
- A deep understanding of using cloud computing infrastructure and data pipelines in production.
- Self-motivation: You teach yourself new skills. You take the initiative to solve problems before they arise. You roll up your sleeves and get stuff done.
- Team motivation: You listen to others, speak your mind, and ask the right questions. You are a great collaborator and teacher.
- The drive to make a positive impact on customers' lives.

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Jharkhand

Work from Office

Job Overview
We are looking for talented Machine Learning Engineers to join us and be part of this journey. You will work closely with other Engineers, Product Managers, and underwriters to develop, improve, and deploy machine learning models and to solve other optimization problems. We make extensive use of machine learning in our credit product, where it is used (among other things) for underwriting and loan servicing decisions. We are also actively exploring other applications of Machine Learning in some of our newer products, with the ultimate goal of improving the user experience.
Machine Learning sits at the intersection of a number of different disciplines: Computer Science, Statistics, Operations Research, Data Science, and others. At Branch, we fundamentally believe that in order for Machine Learning to be impactful, it needs to be closely embedded into the rest of the product development and software engineering process, which is why we emphasize the importance of software engineering skills and experience for this role.
As a company, we are passionate about our customers, fearless in the face of barriers, and driven by data. As an engineering team, we value bottom-up innovation and decentralized decision-making. We believe the best ideas can come from anyone in the company, and we are working hard to create an environment where everyone feels empowered to propose solutions to the challenges we face. We are looking for individuals who thrive in a fast-moving, innovative, and customer-focused setting.
Responsibilities
- Credit Decisions: Core to our business is understanding and building signals from unstructured and structured data to identify good borrowers.
- Customer Service: Using machine learning and LLM/NLP, automate customer service interactions and provide context to our customer service team.
- Fraud Prevention: Identify patterns of fraudulent behavior and build models to detect and prevent these behaviors.
- Teamwork: Bring your experience to bear on the technical direction and abilities of the team, and work cross-functionally with policy and product teams as we improve processes and break new ground.
Qualifications
- 2+ years of hands-on experience building software in a production environment. Startup or early-stage team experience is preferred.
- Excellent software engineering and programming skills, especially Python and SQL.
- A diverse range of data skills, including experimentation, statistics, and machine learning, and experience using these skills to inform business decisions.
- A deep understanding of using cloud computing infrastructure and data pipelines in production.
- Self-motivation: You teach yourself new skills. You take the initiative to solve problems before they arise. You roll up your sleeves and get stuff done.
- Team motivation: You listen to others, speak your mind, and ask the right questions. You are a great collaborator and teacher.
- The drive to make a positive impact on customers' lives.

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
