
178 Jupyter Notebook Jobs

Set up a Job Alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

6.0 - 8.0 years

8 - 12 Lacs

Hyderabad

Hybrid

Job Summary
We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and modern data engineering tools in cloud environments such as AWS.

Key Skills & Responsibilities
- Strong expertise in PySpark and Apache Spark for batch and real-time data processing.
- Experience designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
- Proficiency in Python for scripting, automation, and building reusable components.
- Hands-on experience with scheduling tools such as Airflow or Control-M to orchestrate workflows.
- Familiarity with the AWS ecosystem, especially S3 and related file system operations.
- Strong understanding of Unix/Linux environments and shell scripting.
- Experience with Hadoop, Hive, and platforms such as Cloudera or Hortonworks.
- Ability to handle CDC (Change Data Capture) operations on large datasets.
- Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
- Strong knowledge of data modeling, data validation, and writing unit test cases.
- Exposure to real-time and batch integration with downstream/upstream systems.
- Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
- Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).

Preferred Skills
- Experience in building or integrating APIs for data provisioning.
- Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
- Familiarity with AI/ML model development using PySpark in cloud environments.

Skills: PySpark, Apache Spark, Python, SQL, ETL pipelines, AWS, S3, Airflow, Control-M, CI/CD, Jenkins, Git, Unix/Linux, shell scripting, Hadoop, Hive, Cloudera, Hortonworks, CDC, data modeling, data validation, performance tuning, unit test cases, API integration, real-time and batch integration, AI/ML model development, Agile methodologies, Jupyter Notebook, Zeppelin, PyCharm, Informatica, Tableau, Jasper, QlikView

Mandatory Key Skills: Informatica, Jupyter Notebook, API integration, Unix, Linux, Git, AWS S3, Hive, Cloudera, Jasper, Airflow, Hadoop, data modeling, PySpark
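
As an illustration of the pipeline work this posting describes, here is a minimal PySpark batch ETL sketch; the S3 paths and column names are assumptions, not the employer's actual schema:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: ingest raw orders from S3, validate, transform, write back.
spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.parquet("s3a://example-bucket/raw/orders/")  # assumed path

# Validation: drop rows missing the business key, flag negative amounts.
clean = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("is_valid_amount", F.col("amount") >= 0)
)

# Transformation: daily revenue per customer.
daily = (
    clean.filter("is_valid_amount")
         .groupBy("customer_id", F.to_date("order_ts").alias("order_date"))
         .agg(F.sum("amount").alias("daily_revenue"))
)

daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-bucket/curated/daily_revenue/"
)
```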

Posted 11 hours ago

Apply

5.0 - 8.0 years

25 - 40 Lacs

Bengaluru

Work from Office

Job Summary
We are looking for a talented Data Scientist to join our team. The ideal candidate will have a strong foundation in data analysis, statistical models, and machine learning algorithms. You will work closely with the team to solve complex problems and drive business decisions using data. This role requires strategic thinking, problem-solving skills, and a passion for data.

Job Responsibilities
- Analyse large, complex datasets to extract insights and determine the appropriate techniques to use.
- Build predictive models and machine learning algorithms, and conduct A/B tests to assess the effectiveness of models.
- Present information using data visualization techniques.
- Collaborate with different teams (e.g., product development, marketing) and stakeholders to understand business needs and devise possible solutions.
- Stay updated on the latest technology trends in data science.
- Develop and implement real-time machine learning models for various projects.
- Engage with clients and consultants to gather and understand project requirements and expectations.
- Write well-structured, detailed, and compute-efficient code in Python to facilitate data analysis and model development.
- Utilize IDEs such as Jupyter Notebook, Spyder, and PyCharm for coding and model development.
- Apply agile methodology in project execution, participating in sprints, stand-ups, and retrospectives to enhance team collaboration and efficiency.

Education
IC - Typically requires a minimum of 5 years of related experience.
Mgr & Exec - Typically requires a minimum of 3 years of related experience.
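
The A/B testing responsibility above reduces to comparing a metric between two groups; a minimal sketch using scipy on synthetic data (the sample sizes and effect size are invented for illustration):

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for a metric in the control and treatment groups.
rng = np.random.default_rng(42)
control = rng.normal(loc=10.0, scale=2.0, size=500)
treatment = rng.normal(loc=10.4, scale=2.0, size=500)

# Welch's t-test (does not assume equal variances across groups).
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
```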

Posted 12 hours ago

Apply

6.0 - 10.0 years

4 - 8 Lacs

Bengaluru

Hybrid

Interview Mode: Virtual (2 Rounds)
Type: Contract-to-Hire (C2H)

Job Summary
We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and modern data engineering tools in cloud environments such as AWS.

Key Skills & Responsibilities
- Strong expertise in PySpark and Apache Spark for batch and real-time data processing.
- Experience designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
- Proficiency in Python for scripting, automation, and building reusable components.
- Hands-on experience with scheduling tools such as Airflow or Control-M to orchestrate workflows.
- Familiarity with the AWS ecosystem, especially S3 and related file system operations.
- Strong understanding of Unix/Linux environments and shell scripting.
- Experience with Hadoop, Hive, and platforms such as Cloudera or Hortonworks.
- Ability to handle CDC (Change Data Capture) operations on large datasets.
- Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
- Strong knowledge of data modeling, data validation, and writing unit test cases.
- Exposure to real-time and batch integration with downstream/upstream systems.
- Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
- Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).

Preferred Skills
- Experience in building or integrating APIs for data provisioning.
- Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
- Familiarity with AI/ML model development using PySpark in cloud environments.

Skills: PySpark, Apache Spark, Python, SQL, ETL pipelines, AWS, S3, Airflow, Control-M, CI/CD, Jenkins, Git, Unix/Linux, shell scripting, Hadoop, Hive, Cloudera, Hortonworks, CDC, data modeling, data validation, performance tuning, unit test cases, API integration, real-time and batch integration, AI/ML model development, Agile methodologies, Jupyter Notebook, Zeppelin, PyCharm, Informatica, Tableau, Jasper, QlikView

Mandatory Key Skills: Apache Spark, Python, Unix, Linux, performance tuning, Agile methodologies, Hadoop, ETL, PySpark
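
One plausible shape for the Airflow orchestration this role mentions; the DAG id, schedule, and script paths are assumptions, and the `schedule` argument assumes Airflow 2.4 or later:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical DAG: ingest, then transform, then validate, each via spark-submit.
default_args = {"retries": 2, "retry_delay": timedelta(minutes=5)}

with DAG(
    dag_id="pyspark_etl_daily",          # assumed name
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",                # daily at 02:00
    catchup=False,
    default_args=default_args,
) as dag:
    ingest = BashOperator(
        task_id="ingest",
        bash_command="spark-submit jobs/ingest.py",     # assumed script path
    )
    transform = BashOperator(
        task_id="transform",
        bash_command="spark-submit jobs/transform.py",  # assumed script path
    )
    validate = BashOperator(
        task_id="validate",
        bash_command="spark-submit jobs/validate.py",   # assumed script path
    )

    ingest >> transform >> validate
```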

Posted 13 hours ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Chennai

Hybrid

Interview Mode: Virtual (2 Rounds)
Type: Contract-to-Hire (C2H)

Job Summary
We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and modern data engineering tools in cloud environments such as AWS.

Key Skills & Responsibilities
- Strong expertise in PySpark and Apache Spark for batch and real-time data processing.
- Experience designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
- Proficiency in Python for scripting, automation, and building reusable components.
- Hands-on experience with scheduling tools such as Airflow or Control-M to orchestrate workflows.
- Familiarity with the AWS ecosystem, especially S3 and related file system operations.
- Strong understanding of Unix/Linux environments and shell scripting.
- Experience with Hadoop, Hive, and platforms such as Cloudera or Hortonworks.
- Ability to handle CDC (Change Data Capture) operations on large datasets.
- Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
- Strong knowledge of data modeling, data validation, and writing unit test cases.
- Exposure to real-time and batch integration with downstream/upstream systems.
- Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
- Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).

Preferred Skills
- Experience in building or integrating APIs for data provisioning.
- Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
- Familiarity with AI/ML model development using PySpark in cloud environments.

Skills: PySpark, Apache Spark, Python, SQL, ETL pipelines, AWS, S3, Airflow, Control-M, CI/CD, Jenkins, Git, Unix/Linux, shell scripting, Hadoop, Hive, Cloudera, Hortonworks, CDC, data modeling, data validation, performance tuning, unit test cases, API integration, real-time and batch integration, AI/ML model development, Agile methodologies, Jupyter Notebook, Zeppelin, PyCharm, Informatica, Tableau, Jasper, QlikView

Mandatory Key Skills: CI/CD, Zeppelin, PyCharm, ETL tools, Control-M, unit test cases, Tableau, performance tuning, Jenkins, QlikView, Informatica, Jupyter Notebook, API integration, Unix, Linux, PySpark
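
One common way to meet the CDC requirement without a dedicated CDC tool is to diff consecutive snapshots; a rough PySpark sketch with an assumed key and tracked column:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cdc-diff").getOrCreate()

# Hypothetical daily snapshots keyed by customer_id; paths and columns assumed.
old = spark.read.parquet("s3a://example-bucket/snapshots/2024-01-01/")
new = spark.read.parquet("s3a://example-bucket/snapshots/2024-01-02/")

# Keep only the key, the tracked attribute, and a presence flag per side.
old_k = old.select("customer_id", F.col("email").alias("old_email"),
                   F.lit(True).alias("in_old"))
new_k = new.select("customer_id", F.col("email").alias("new_email"),
                   F.lit(True).alias("in_new"))

# Full outer join exposes inserts (only in new), deletes (only in old),
# and updates (present in both with a changed attribute).
diff = old_k.join(new_k, "customer_id", "full_outer").select(
    "customer_id",
    F.when(F.col("in_old").isNull(), F.lit("insert"))
     .when(F.col("in_new").isNull(), F.lit("delete"))
     .when(F.col("old_email") != F.col("new_email"), F.lit("update"))
     .alias("change_type"),
).filter("change_type IS NOT NULL")

diff.show()
```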

Posted 13 hours ago

Apply

2.0 - 5.0 years

4 - 8 Lacs

Kolkata

Work from Office

Type: Contract-to-Hire (C2H)

Job Summary
We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and modern data engineering tools in cloud environments such as AWS.

Key Skills & Responsibilities
- Strong expertise in PySpark and Apache Spark for batch and real-time data processing.
- Experience designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
- Proficiency in Python for scripting, automation, and building reusable components.
- Hands-on experience with scheduling tools such as Airflow or Control-M to orchestrate workflows.
- Familiarity with the AWS ecosystem, especially S3 and related file system operations.
- Strong understanding of Unix/Linux environments and shell scripting.
- Experience with Hadoop, Hive, and platforms such as Cloudera or Hortonworks.
- Ability to handle CDC (Change Data Capture) operations on large datasets.
- Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
- Strong knowledge of data modeling, data validation, and writing unit test cases.
- Exposure to real-time and batch integration with downstream/upstream systems.
- Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
- Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).

Preferred Skills
- Experience in building or integrating APIs for data provisioning.
- Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
- Familiarity with AI/ML model development using PySpark in cloud environments.

Skills: PySpark, Apache Spark, Python, SQL, ETL pipelines, AWS, S3, Airflow, Control-M, CI/CD, Jenkins, Git, Unix/Linux, shell scripting, Hadoop, Hive, Cloudera, Hortonworks, CDC, data modeling, data validation, performance tuning, unit test cases, API integration, real-time and batch integration, AI/ML model development, Agile methodologies, Jupyter Notebook, Zeppelin, PyCharm, Informatica, Tableau, Jasper, QlikView

Mandatory Key Skills: CI/CD, Zeppelin, PyCharm, ETL, Control-M, performance tuning, Jenkins, QlikView, Informatica, Jupyter Notebook, API integration, Unix, PySpark
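
Two of the most common levers behind the performance-tuning bullet are broadcast joins for small dimension tables and explicit repartitioning before partitioned writes; a sketch with assumed table names and paths:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

facts = spark.read.parquet("s3a://example-bucket/facts/")       # large, assumed
dims = spark.read.parquet("s3a://example-bucket/dim_country/")  # small, assumed

# Broadcast the small table so the join avoids shuffling the large one;
# cache because the result is reused by two actions below.
enriched = facts.join(broadcast(dims), "country_code").cache()

print(enriched.count())  # first action materializes the cache

# Repartition by the write key to avoid producing many small output files.
(enriched
    .repartition("country_code")
    .write.mode("overwrite")
    .partitionBy("country_code")
    .parquet("s3a://example-bucket/curated/enriched/"))
```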

Posted 14 hours ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

Job Description: As a Big Data Engineer with Capco, you will play a crucial role in designing and implementing innovative solutions to help clients transform their business. Your key responsibilities will include:
- Strong skills in messaging technologies like Apache Kafka or equivalent, programming in Scala, Spark with optimization techniques, and Python.
- Ability to write queries through Jupyter Notebook and work with orchestration tools like NiFi and Airflow.
- Design and implement intuitive, responsive UIs to enhance data understanding and analytics for issuers.
- Experience with SQL and distributed systems, along with a strong understanding of cloud architecture.
- Ensuring a high-quality code base through the writing and reviewing of performance-oriented, well-tested code.
- Demonstrated experience in building complex products, with knowledge of Splunk or other alerting and monitoring solutions.
- Proficiency in using Git and Jenkins, and a broad understanding of software engineering concepts and methodologies.

Joining Capco will provide you with the opportunity to make a significant impact through innovative thinking, delivery excellence, and thought leadership. You will be part of a diverse and inclusive culture that values creativity and offers career advancement without forced hierarchies. At Capco, we believe in the competitive advantage of diversity of people and perspectives.
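
Since this role pairs Kafka with Spark, the usual bridge is Spark Structured Streaming; a minimal PySpark sketch with the broker address, topic, and checkpoint path assumed (requires the spark-sql-kafka package on the classpath):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

# Hypothetical topic and broker; adjust for a real cluster.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "transactions")
         .load()
)

# Kafka delivers key/value as binary; cast value to string for parsing downstream.
decoded = events.select(F.col("value").cast("string").alias("payload"))

query = (
    decoded.writeStream.format("console")
           .outputMode("append")
           .option("checkpointLocation", "/tmp/chk/transactions")  # assumed path
           .start()
)
query.awaitTermination()
```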

Posted 3 days ago

Apply

6.0 - 12.0 years

0 Lacs

Karnataka

On-site

As a Data Science and AI/ML professional at LTIMindtree, you will play a crucial role in leveraging digital technologies to drive innovation and growth for our clients. With over 6 years of experience in the field, you will be expected to have a deep understanding of machine learning techniques and algorithms, such as GPTs, CNN, RNN, k-NN, Naive Bayes, SVM, and Decision Forests. Your expertise in data frameworks like Hadoop, business intelligence tools like Tableau, and cloud-native skills will be essential in delivering superior business outcomes.

Key Responsibilities:
- Utilize your knowledge of SQL and Python (familiarity with Scala, Java, or C++ is an asset) to develop innovative AI solutions.
- Apply your analytical mind and strong math skills in statistics and algebra to solve complex business challenges.
- Work with common data science toolkits such as TensorFlow, Keras, PyTorch, pandas, Microsoft CNTK, and NumPy, with expertise in at least one being highly desirable.
- Leverage your experience in NLP, NLG, and Large Language Models like BERT, LLaMA, LaMDA, GPT, BLOOM, PaLM, and DALL-E to drive transformative projects.
- Communicate effectively and present your findings with great clarity, thriving in a fast-paced team environment.
- Utilize AI/ML and Big Data technologies like AWS SageMaker, Azure Cognitive Services, Google Colab, Jupyter Notebook, Hadoop, PySpark, Hive, and AWS EMR to scale your solutions.
- Apply your expertise in NoSQL databases such as MongoDB, Cassandra, HBase, and vector databases to handle large datasets effectively.
- Demonstrate a good understanding of applied statistics (distributions, statistical testing, regression, etc.) to derive valuable insights for the business.

Qualifications Required:
- More than 6 years of experience in the Data Science and AI/ML domain
- Excellent understanding of machine learning techniques and algorithms
- Experience using business intelligence tools and data frameworks
- Knowledge of SQL and Python; familiarity with Scala, Java, or C++ is an asset
- Strong math skills (e.g., statistics, algebra)
- Experience with common data science toolkits
- Great communication and presentation skills
- Experience with AI/ML and Big Data technologies
- Experience with NoSQL databases

Please note that the mentioned qualifications, skills, and experience are mandatory for the role at LTIMindtree.
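
As a toy illustration of the classical algorithms this role names (k-NN, Naive Bayes, SVM), here is a scikit-learn sketch on a bundled dataset; this is illustrative only, not the employer's actual workflow:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Bundled toy dataset; any tabular feature matrix would do.
X, y = load_iris(return_X_y=True)

# Compare the three classifiers with 5-fold cross-validation.
for name, model in [
    ("k-NN", KNeighborsClassifier(n_neighbors=5)),
    ("Naive Bayes", GaussianNB()),
    ("SVM", SVC(kernel="rbf")),
]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```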

Posted 3 days ago

Apply

5.0 - 8.0 years

11 - 15 Lacs

Hyderabad

Work from Office

Full Stack Lead Developer

Role description
Ecolab is looking for an experienced full stack lead developer to be part of a dynamic team that's at the forefront of technological innovation. We're leveraging cutting-edge AI to create novel solutions that optimize operations for our clients, particularly within the restaurant industry. Our work is transforming how restaurants operate, making them more efficient and sustainable. As a key player in our new division, you'll have the unique opportunity to shape its culture and direction. Your contributions will directly impact the success of our innovative projects and help define the future of our product offerings. Additionally, you will experience the best of both worlds on this team at Ecolab: the agility and creativity of a startup paired with the stability and resources of a global leader. Our collaborative environment fosters innovation while providing the support and security you need to thrive.

Responsibilities
- Develop, implement, and maintain scalable, high-performance applications using .NET Core or Python.
- Design and maintain end-to-end solutions with a cloud provider such as AWS or Azure, including services like Azure Data Factory, Azure Storage (Blob Storage), and Azure SQL/NoSQL databases.
- Design, build, and manage database systems, using Python for data manipulation and processing, particularly pandas and NumPy for advanced data analytics and scientific computing tasks.
- Use front-end technologies such as SCSS, CSS, React, Streamlit, or Flask to create responsive and user-friendly web interfaces (see the sketch after this listing).
- Collaborate with cross-functional teams to gather and analyze system requirements and translate them into technical specifications for new application features and enhancements.
- Contribute to architectural and technical decisions and provide expertise in code reviews to ensure high code quality and adherence to best practices.
- Ensure the quality and performance of applications by implementing version control (Git) and continuous integration/continuous deployment (CI/CD) practices.
- Contribute to the creation of new solutions, and troubleshoot and optimize existing solutions to improve performance and reliability.

Minimum technical qualifications
- Bachelor's degree in computer science, engineering, or a related field with 5-8 years of full stack experience, OR 5+ years of relevant experience in full stack development.
- Solid programming skills in .NET Core or Python, React, and SQL relational / NoSQL document databases.
- Proficiency with various IDEs such as Jupyter notebooks or Visual Studio Code.
- Experience with a cloud provider (AWS or Azure) and services such as Azure Data Factory, Azure SQL Database, Cosmos DB, Azure DevOps, and Azure Active Directory.
- Familiarity with version control systems (Git) and CI/CD practices.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.
- Ability to adapt to changing priorities and manage multiple tasks effectively.

Preferred skills / interests
- Previous experience with early-stage product development.
- Proven track record of deploying products in dynamic environments.
- Interest in collaborating with partners outside of the core team/organization (including SMEs in computer vision AI).
- Ability to wear multiple hats and plug into different roles as the product develops.
- Desire to be in a fast-moving, agile environment with willingness to adjust quickly.
- Openness to experimental approaches typical of tech start-ups.
- Willingness to learn new skills and technical languages as needed.
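
A toy sketch of the Python/Flask/pandas combination the qualifications call for; the route, data, and column names are invented for illustration:

```python
import pandas as pd
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical operational data; in practice this would come from a database
# such as Azure SQL or Cosmos DB rather than an in-memory frame.
orders = pd.DataFrame({
    "restaurant": ["A", "A", "B", "B", "B"],
    "amount": [12.5, 8.0, 22.0, 5.5, 17.0],
})

@app.route("/api/revenue-by-restaurant")
def revenue_by_restaurant():
    # Aggregate with pandas and return JSON for a front-end chart.
    summary = orders.groupby("restaurant")["amount"].sum()
    return jsonify(summary.to_dict())

if __name__ == "__main__":
    app.run(debug=True)
```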

Posted 4 days ago

Apply

2.0 - 4.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Principal Developer - ML/Prompt Engineer

Technologies: Amazon Bedrock, RAG models, Java, Python, C or C++, AWS Lambda

Responsibilities:
- Develop, deploy, and maintain a Retrieval Augmented Generation (RAG) model in Amazon Bedrock, our cloud-based platform for building and scaling generative AI applications.
- Design and implement a RAG model that can generate natural language responses, commands, and actions based on user queries and context, using the Anthropic Claude model as the backbone.
- Integrate the RAG model with Amazon Bedrock, which offers a choice of high-performing foundation models from leading AI companies and Amazon via a single API, along with a broad set of capabilities for building generative AI applications with security, privacy, and responsible AI.
- Optimize the RAG model for performance, scalability, and reliability, using best practices and robust engineering methodologies.
- Design, test, and optimize prompts to improve performance, accuracy, and alignment of large language models across diverse use cases.
- Develop and maintain reusable prompt templates, chains, and libraries to support scalable and consistent GenAI applications.

Skills/Qualifications:
- Experience programming in at least one language, such as Java, Python, or C/C++.
- Experience working with generative AI tools, models, and frameworks, such as Anthropic, OpenAI, Hugging Face, TensorFlow, PyTorch, or Jupyter.
- Experience working with RAG architectures or related tooling, such as Ragna or Pinecone.
- Experience working with Amazon Bedrock or related services, such as AWS Lambda, Amazon SageMaker, or Amazon Comprehend.
- Ability to design, iterate, and optimize prompts for various LLM use cases (e.g., summarization, classification, translation, Q&A, and agent workflows).
- Deep understanding of prompt engineering techniques (zero-shot, few-shot, chain-of-thought, etc.) and their effect on model behavior.
- Familiarity with prompt evaluation strategies, including manual review, automatic metrics, and A/B testing frameworks.
- Experience building prompt libraries, reusable templates, and structured prompt workflows for scalable GenAI applications.
- Ability to debug and refine prompts to improve accuracy, safety, and alignment with business objectives.
- Awareness of prompt injection risks and experience implementing mitigation strategies.
- Familiarity with prompt tuning, parameter-efficient fine-tuning (PEFT), and prompt chaining methods.
- Familiarity with continuous deployment and DevOps tools preferred; experience with Git preferred.
- Experience working in agile/scrum environments.
- Successful track record of interfacing and communicating effectively across cross-functional teams.
- Good communication, analytical, presentation, and problem-solving skills, and a learning attitude.

Mandatory Key Skills: TensorFlow, PyTorch, Java, Agile, Scrum, DevOps, CI/CD, Jupyter Notebook, AWS Lambda, C++, Git, AWS SageMaker, DevOps tools, ML, AWS, Machine Learning, Deep Learning, Natural Language Processing, Artificial Intelligence, Neural Networks, Data Science, Keras, Python
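
A rough sketch of the retrieve-then-generate loop this role describes, calling Claude on Bedrock via boto3; the region, model ID, retriever, and prompt wording are all assumptions, not the employer's actual design:

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region

def retrieve(query: str) -> list[str]:
    """Placeholder retriever; a real system would query a vector store."""
    return ["Doc snippet 1 ...", "Doc snippet 2 ..."]

def answer(query: str) -> str:
    # Retrieval step: fetch supporting context for the user's question.
    context = "\n\n".join(retrieve(query))
    # Generation step: ground the model's answer in the retrieved context.
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{
            "role": "user",
            "content": f"Answer using only this context:\n{context}\n\nQuestion: {query}",
        }],
    }
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model ID
        body=json.dumps(body),
    )
    payload = json.loads(resp["body"].read())
    return payload["content"][0]["text"]

print(answer("What does the maintenance contract cover?"))
```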

Posted 4 days ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description

Key Responsibilities:
- Analyse Data Requirements in Sales Cycles: Conduct high-level, rapid assessments or in-depth evaluations for proofs of concept (POCs) to identify client data needs early. Serve as the team's data expert, providing actionable advice on feasibility, integration, and optimization, including identifying patterns such as customer churn through data modelling to improve targeting strategies.
- Develop Implementation Specifications: Collaborate with the technical product manager to define detailed data-related specs, ensuring seamless handoff to the implementation team, with a focus on designing and maintaining ETL pipelines that integrate data from systems such as CRM, billing, and operations.
- Conduct Data-Focused Testing and QA: Validate implementations through rigorous testing, ensuring data accuracy, integrity, and performance meet enterprise standards, including processing and cleaning large datasets to improve reporting accuracy and system responsiveness.
- Drive Proactive Data Innovation: Monitor and innovate on data initiatives across R&D teams, identifying opportunities for enhancement. Prepare concise summaries and recommendations for the Director of Enterprise Solution Engineering, such as developing automated reporting solutions that reduce manual effort by significant margins.
- Data Visualization and Reporting: Create dashboards and reports to support sales demos, client presentations, and internal decision-making, including interactive BI dashboards for tracking trends such as customer behaviour, usage, and service levels using tools like Tableau (see the sketch after this listing).
- Client Collaboration and Training: Work directly with clients to refine data requirements and train internal teams on data tools, best practices, and compliance (e.g., GDPR, data security), while engaging stakeholders to define KPIs and deliver tailored visual analytics solutions.
- Process Optimization: Identify inefficiencies in data workflows, recommend automation or integration strategies, and track metrics to measure solution impact post implementation, such as building centralized dashboards that streamline operational efficiency and resource utilization.
- Cross-Functional Liaison: Act as a bridge between sales, engineering, and product teams to align on data strategy, including forecasting trends based on industry data benchmarks and supporting root cause analysis of operational issues with preventive measures.

Required Skills and Qualifications:
- Minimum of 3 years of professional experience in data analysis or related fields, with at least 1 year designing ETL pipelines, developing BI dashboards, or delivering data-driven insights in enterprise settings.
- Bachelor's degree in a relevant field such as Computer Science, Data Science, Statistics, or Engineering required; Master's degree in Data Analytics or a related discipline preferred.
- Proficiency in data analysis tools such as SQL, Python (including libraries like pandas, NumPy, Matplotlib, and Seaborn), ETL processes, and visualization platforms (e.g., Tableau, Jupyter Notebook).
- Strong understanding of data governance, security, and compliance in enterprise environments, with experience in data pre-processing, feature engineering, and model evaluation.
- Experience in media intelligence or similar data-intensive industries preferred, including healthcare, energy, or enterprise technology domains, with hands-on work in predictive modelling, time series forecasting, and machine-learning-driven dashboards.
- Excellent communication skills for collaborating with technical and non-technical stakeholders, including stakeholder engagement and delivering insights for data-driven decision-making.
- Ability to thrive in a fast-paced, innovative setting with a proactive mindset, demonstrated through problem-solving in application support, root cause analysis, and developing automated solutions.

This role will report directly to the Director of Enterprise Solution Engineering and contribute to Meltwater's mission of delivering cutting-edge data solutions.

What We Offer:
- Flexible paid time off options for enhanced work-life balance.
- Comprehensive health insurance tailored for you.
- Employee assistance programs covering mental health, legal, financial, wellness, and behavioural areas to ensure your overall well-being.
- Complimentary Calm App subscription for you and your loved ones, because mental wellness matters.
- An energetic work environment with a hybrid work style, providing the balance you need.
- A family leave program that grows with your tenure at Meltwater.
- An inclusive community and ongoing professional development opportunities to elevate your career.

Our Story
At Meltwater, we believe that when you have the right people in the right environment, great things happen. Our best-in-class technology empowers our 27,000 customers around the world to make better business decisions through data. But we can't do that without our global team of developers, innovators, problem-solvers, and high-performers who embrace challenges and find new solutions for our customers. Our award-winning global culture drives everything we do and creates an environment where our employees can make an impact, learn every day, feel a sense of belonging, and celebrate each other's successes along the way. We are innovators at the core who see the potential in people, ideas, and technologies. Together, we challenge ourselves to go big, be bold, and build best-in-class solutions for our customers. We're proud of our diverse team of 2,200+ employees in 50 locations across 25 countries around the world. No matter where you are, you'll work with people who care about your success and get the support you need to unlock new heights in your career. We are Meltwater. We love working here, and we think you will too. "Inspired by innovation, powered by people."

Equal Employment Opportunity Statement
Meltwater is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind. At Meltwater, we are dedicated to fostering an inclusive and diverse workplace where every employee feels valued, respected, and empowered. We are committed to the principle of equal employment opportunity and strive to provide a work environment that is free from discrimination and harassment. All employment decisions at Meltwater are made based on business needs, job requirements, and individual qualifications, without regard to race, colour, religion or belief, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, veteran status, or any other status protected by the applicable laws and regulations. Meltwater does not tolerate discrimination or harassment of any kind, and we actively promote a culture of respect, fairness, and inclusivity. We encourage applicants of all backgrounds, experiences, and abilities to apply and join us in our mission to drive innovation and make a positive impact in the world.
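
Tying back to the tooling in the qualifications above (pandas, Matplotlib, ETL): a compact sketch of the extract-clean-aggregate-visualize loop behind the dashboard work, with invented file and column names:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical extract: usage records from a CRM/billing export (columns assumed).
usage = pd.read_csv("usage_export.csv", parse_dates=["event_date"])

# Clean: drop records without an account, clamp an occasionally-negative metric.
usage = usage.dropna(subset=["account_id"])
usage["searches"] = usage["searches"].clip(lower=0)

# Aggregate: weekly usage trend per account tier.
weekly = (
    usage.groupby([pd.Grouper(key="event_date", freq="W"), "tier"])["searches"]
         .sum()
         .unstack("tier")
)

# Visualize: the kind of trend line that would feed a Tableau/BI dashboard.
weekly.plot(title="Weekly searches by account tier")
plt.tight_layout()
plt.savefig("weekly_usage.png")
```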

Posted 5 days ago

Apply

3.0 - 7.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

**Role Overview:**
As a Data Scientist specializing in fraud detection, your role will involve conducting analysis and research to identify fraud patterns and trends, focusing on new account fraud and account takeovers. You will integrate and evaluate third-party data sources to enhance existing fraud detection solutions and perform retrospective analyses to demonstrate the value of these solutions to potential clients. Additionally, you will collaborate with senior data scientists on predictive model development, conduct quality assurance testing of machine learning algorithms, and validate model performance across various data scenarios.

**Key Responsibilities:**
- Conduct ad hoc exploratory data analysis to identify fraud patterns and trends
- Integrate and evaluate third-party data sources to enhance fraud detection solutions
- Perform retrospective analyses and back-testing to demonstrate solution value
- Execute statistical studies to support product development decisions
- Collaborate on predictive model development and implementation
- Conduct quality assurance testing of machine learning algorithms and data attributes
- Validate model performance and stability across different data scenarios
- Define and deliver analytical reports translating complex findings into actionable insights
- Present results to technical and non-technical stakeholders
- Support client demonstrations and proof-of-concept initiatives
- Document analytical methodologies and findings

**Qualifications Required:**
- Bachelor's or advanced degree in Statistics, Mathematics, Computer Science, Data Science, or a related quantitative field
- 3-5 years of experience in data science, analytics, or a related analytical role
- Proficiency in Python, Jupyter Notebook, and statistical analysis
- Strong SQL skills for data extraction and manipulation
- Proficiency in Microsoft Excel and PowerPoint
- Solid understanding of statistical modeling and machine learning concepts
- Experience with data exploration and pattern recognition
- Strong problem-solving and critical thinking abilities
- Ability to explain complex analytical concepts to non-technical audiences
- Immediate joining (within 2 weeks) preferred
- Must hold a valid passport; a B1 visa is an added advantage

Please note that the work location for this position is in person.
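
The exploratory pattern-hunting described above often starts with simple velocity aggregates; a pandas sketch with assumed column names and an assumed alert threshold:

```python
import pandas as pd

# Hypothetical account-opening events; file and columns are illustrative only.
events = pd.read_csv("account_openings.csv", parse_dates=["created_at"])

# Velocity check: many new accounts sharing one device in a short window
# is a classic new-account-fraud signal.
per_device = (
    events.set_index("created_at")
          .groupby("device_id")
          .resample("1D")["account_id"]
          .nunique()
          .rename("accounts_per_day")
          .reset_index()
)

suspicious = per_device[per_device["accounts_per_day"] >= 5]  # assumed threshold
print(suspicious.sort_values("accounts_per_day", ascending=False).head(10))
```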

Posted 5 days ago

Apply

15.0 - 25.0 years

5 - 9 Lacs

Pune

Work from Office

About The Role
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: SAP for Utilities Billing
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Key Responsibilities
- Design, configure and build applications to meet business process and application requirements.
- Analyze requirements and enhance and build highly optimized standard/custom applications, as well as create business process and related technical documentation.
- Billing execution (individual and batch) for daily reporting to managers; risk identification in your module.
- Analyze issues and work on bug fixes.

Technical Experience
- Should have knowledge of standard modules used in RICEFW development for Billing objects.
- Should have good knowledge of all billing and invoicing processes, such as the Meter to Cash cycle, billing exceptions and reversals, joint invoicing, bill printing, collective invoicing, and advanced billing functions like Real Time Pricing and Budget Billing.
- Should have sound knowledge of Billing Master Data and integration points with Device Management and FICA.
- Should have strong debugging skills (PWB).

Additional Info
- Good communication skills.
- Good interpersonal skills.
- A minimum of 15 years of full-time education is required.

Qualification: 15 years full time education

Posted 1 week ago

Apply

15.0 - 25.0 years

5 - 9 Lacs

Pune

Work from Office

About The Role
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: SAP for Utilities Billing
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Key Responsibilities
- Design, configure and build applications to meet business process and application requirements.
- Analyze requirements and enhance and build highly optimized standard/custom applications, as well as create business process and related technical documentation.
- Billing execution (individual and batch) for daily reporting to managers; risk identification in your module.
- Analyze issues and work on bug fixes.

Technical Experience
- Should have hands-on knowledge of implementing billing-related enhancements and FQ events.
- Should have knowledge of standard modules used in RICEFW development for Billing objects.
- Should have good knowledge of all billing and invoicing processes, such as the Meter to Cash cycle, billing exceptions and reversals, joint invoicing, bill printing, collective invoicing, and advanced billing functions like Real Time Pricing and Budget Billing.
- Should have sound knowledge of Billing Master Data and integration points with Device Management and FICA.
- Should have strong debugging skills (PWB).

Additional Info
- Good communication skills.
- Good interpersonal skills.
- A minimum of 15 years of full-time education is required.

Qualification: 15 years full time education

Posted 1 week ago

Apply

3.0 - 5.0 years

11 - 15 Lacs

Bengaluru

Work from Office

Job Title: Senior AI Engineer - Developer
Experience Required: 3-5 years

About Happiest Minds: Happiest Minds is a long-standing leader in providing digital transformation, cloud, AI, and IoT services. We are on a mission to deliver happiness to our customers through innovative technology solutions and a positive work environment.

Job Description: Happiest Minds is seeking a passionate and skilled **Senior AI Engineer - Developer** to join our dynamic team. The ideal candidate will have a strong background in artificial intelligence and software development, with a keen understanding of how to leverage modern technologies to solve complex problems.

Key Responsibilities:
- Design, develop, and implement AI models and algorithms to enhance existing solutions and create new AI-driven applications.
- Collaborate with cross-functional teams to gather requirements and translate them into technical specifications.
- Utilize Flask to develop scalable and maintainable web services that integrate with AI models.
- Perform data analysis and manipulation using NumPy and pandas to facilitate data-driven decision-making.
- Create and maintain Jupyter Notebooks for experimentation, prototyping, and data visualization.
- Manage NoSQL databases, specifically MongoDB and Oracle NoSQL, ensuring data storage solutions are optimized for performance and reliability.
- Conduct code reviews and share best practices with junior team members to promote knowledge sharing and skill enhancement.
- Stay updated with the latest trends, technologies, and advancements in AI and software development.

Required Skills and Qualifications:
- 3-5 years of professional experience in development and software engineering.
- Proficient in programming with Flask for web application development.
- Strong experience with data manipulation and analysis using NumPy and pandas.
- Hands-on experience with Jupyter Notebook for data exploration and sharing insights.
- Familiarity with database management systems, specifically MongoDB and Oracle NoSQL.
- Excellent problem-solving skills and a passion for data-driven decision-making.
- Strong communication and teamwork skills to collaborate effectively within teams.

Preferred Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Exposure to machine learning frameworks and libraries (e.g., TensorFlow, PyTorch) is an advantage.
- Ability to work independently, manage multiple tasks, and meet deadlines efficiently.

What We Offer:
- Competitive salary and benefits package.
- Opportunities for professional development and continuous learning.
- A collaborative and innovative work environment where your contributions are valued.
- The chance to work on cutting-edge technologies and impactful projects.
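
A small sketch of the MongoDB-to-pandas analysis path this role names; the connection URI, database, collection, and field names are assumptions for illustration:

```python
import pandas as pd
from pymongo import MongoClient

# Hypothetical connection and collection; adjust URI/names for a real deployment.
client = MongoClient("mongodb://localhost:27017")
collection = client["analytics"]["predictions"]

# Pull a projection of recent documents into a DataFrame for analysis.
docs = collection.find({}, {"_id": 0, "model": 1, "score": 1, "ts": 1}).limit(1000)
df = pd.DataFrame(list(docs))

# Typical notebook-style summary: score distribution per model.
print(df.groupby("model")["score"].describe())
```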

Posted 1 week ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Job Posting Title: BUSINESS INTELLIGENCE ANALYST I
Band/Level: 5-4-S
Education Experience: Bachelor's Degree (High School +4 years)
Employment Experience: Less than 1 year

At TE, you will unleash your potential working with people from diverse backgrounds and industries to create a safer, sustainable and more connected world.

Job Overview
TE Connectivity's Business Intelligence Teams are responsible for the processing, mining and delivery of data to their customer community through repositories, tools and services.

Tasks & Responsibilities
- Assist in the development and deployment of Digital Factory solutions and Machine Learning models across Manufacturing, Quality, and Supply Chain functions.
- Support data collection, cleaning, preparation, and transformation from multiple sources, ensuring data consistency and readiness.
- Contribute to the creation of dashboards and reports using tools such as Power BI or Tableau.
- Work on basic analytics and visualization tasks to derive insights and identify improvement areas.
- Assist in maintaining existing ML models, including data monitoring and model retraining processes.
- Participate in small-scale PoCs (proofs of concept) and pilot projects with senior team members.
- Document use cases, write clean code with guidance, and contribute to knowledge-sharing sessions.
- Support integration of models into production environments and perform basic testing.

Desired Candidate
- Proficiency in Python and/or R for data analysis, along with libraries like pandas, NumPy, Matplotlib, and Seaborn (see the sketch after this listing).
- Basic understanding of statistical concepts such as distributions, correlation, regression, and hypothesis testing.
- Familiarity with SQL or other database querying tools, e.g., pyodbc, sqlite3, PostgreSQL.
- Exposure to ML algorithms like linear/logistic regression, decision trees, k-NN, or SVM.
- Basic knowledge of Jupyter Notebooks and version control using Git/GitHub.
- Good communication skills in English (written and verbal); able to explain technical topics simply.
- Collaborative, eager to learn, and adaptable in a fast-paced and multicultural environment.
- Exposure to or interest in manufacturing technologies (e.g., stamping, molding, assembly).
- Exposure to cloud platforms (AWS/Azure) or services like S3, SageMaker, or Redshift is an advantage.
- Hands-on experience in image data preprocessing (resizing, Gaussian blur, PCA) or computer vision projects.
- Interest in AutoML tools and transfer learning techniques.

ABOUT TE CONNECTIVITY
TE Connectivity plc (NYSE: TEL) is a global industrial technology leader creating a safer, sustainable, productive, and connected future. Our broad range of connectivity and sensor solutions enable the distribution of power, signal and data to advance next-generation transportation, energy networks, automated factories, data centers, medical technology and more. With more than 85,000 employees, including 9,000 engineers, working alongside customers in approximately 130 countries, TE ensures that EVERY CONNECTION COUNTS. Learn more at www.te.com and on LinkedIn, Facebook, WeChat, Instagram and X (formerly Twitter).

WHAT TE CONNECTIVITY OFFERS:
We are pleased to offer you an exciting total package that can also be flexibly adapted to changing life situations - the well-being of our employees is our top priority!
- Competitive Salary Package
- Performance-Based Bonus Plans
- Health and Wellness Incentives
- Employee Stock Purchase Program
- Community Outreach Programs / Charity Events

Across our global sites and business units, we put together packages of benefits that are either supported by TE itself or provided by external service providers. In principle, the benefits offered can vary from site to site.

IMPORTANT NOTICE REGARDING RECRUITMENT FRAUD
TE Connectivity has become aware of fraudulent recruitment activities being conducted by individuals or organizations falsely claiming to represent TE Connectivity. Please be advised that TE Connectivity never requests payment or fees from job applicants at any stage of the recruitment process. All legitimate job openings are posted exclusively on our official careers website at te.com/careers, and all email communications from our recruitment team will come only from actual email addresses ending in @te.com. If you receive any suspicious communications, we strongly advise you not to engage or provide any personal information, and to report the incident to your local authorities.
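
A small illustration of the querying-and-visualization stack from the Desired Candidate list above (sqlite3, pandas, Seaborn); the database, table, and columns are invented:

```python
import sqlite3

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical local database of production measurements.
conn = sqlite3.connect("factory.db")
df = pd.read_sql_query(
    "SELECT line, shift, defect_rate FROM quality_metrics", conn
)
conn.close()

# Quick diagnostic plot: defect rate by production line and shift.
sns.boxplot(data=df, x="line", y="defect_rate", hue="shift")
plt.title("Defect rate by production line")
plt.tight_layout()
plt.savefig("defect_rates.png")
```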

Posted 1 week ago

Apply

4.0 - 7.0 years

6 - 10 Lacs

Gurugram

Work from Office

About the Role:
Grade Level (for internal use): 10

The Team: Join the TeraHelix team within S&P Global's Enterprise Data Organisation (EDO). We are a dynamic group of highly skilled engineers dedicated to building innovative data solutions that empower businesses. Our team works collaboratively on foundational data products, leveraging cutting-edge technologies to solve real-world client challenges.

The Impact: As part of the TeraHelix team, you will contribute to the development of our marquee AI-enabled data products, including TeraHelix's GearBox, ETL Mapper and Data Studio solutions. Your work will directly impact our clients by enhancing their data capabilities and driving significant business value.

What's in it for you:
- Opportunity to work on a distributed, cloud-native, fully Java tech stack (Java 21+) with UI components built in the Vaadin framework.
- Engage in skill-building and innovation opportunities in a supportive environment.
- Collaborate with a diverse group of professionals across data, product, and technology disciplines.
- Contribute to projects that have a tangible impact on the organisation and the industry.

Key Responsibilities:
- Design, develop and maintain robust data pipelines to support data ingestion, transformation and storage.
- Write efficient SQL queries for data extraction, manipulation and analysis.
- Utilise Apache Spark and Python for data processing, automation and integration with various data sources.
- Collaborate with data scientists and stakeholders to understand data requirements and deliver actionable insights.
- Implement data quality checks and validation processes to ensure data accuracy and reliability (see the sketch after this listing).
- Analyse large datasets to identify trends, patterns and anomalies that inform business decisions.
- Create and maintain documentation for data processes, workflows and architecture.
- Stay updated on industry best practices and emerging technologies in data engineering and analysis.
- Provide support using data visualisation tools to help stakeholders interpret data effectively.

What we're looking for:
- Bachelor's degree or higher in Computer Science or a related field.
- Strong experience in SQL for data manipulation and analysis.
- Proficiency in Spark (Java, SQL or PySpark) and Python for data processing and automation tasks.
- Solid understanding of data engineering principles and best practices.
- Experience with data analytics and the ability to derive insights from complex datasets.
- Familiarity with big data technologies (e.g. Hadoop, Spark) and cloud data platforms (e.g. AWS, Azure, GCP).
- Familiarity with data visualisation tools (e.g. Power BI, Tableau, Qlik) and data science notebooks (e.g. Jupyter, Apache Zeppelin) to present findings effectively.
- Knowledge of financial or capital markets to understand business domain requirements.
- Excellent problem-solving skills and attention to detail.
- Strong communication skills for collaboration with cross-functional teams.

Nice to have:
- Experience with Java for data processing or integration tasks.
- Knowledge of ETL (Extract, Transform, Load) processes and tools.
- Understanding of data warehousing concepts and architecture.
- Experience with version control systems (e.g. Git, GitHub, Bitbucket, Azure DevOps).
- Interest in machine learning and data science concepts.

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology - the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People, Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
- Health & Wellness: Health care coverage designed for the mind and body.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
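
As one illustration of the data quality checks named in the Key Responsibilities above, a plain PySpark sketch over an assumed dataset:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3a://example-bucket/curated/trades/")  # assumed path

# Compute all checks in one pass over the data.
checks = df.agg(
    F.count("*").alias("row_count"),
    F.sum(F.col("trade_id").isNull().cast("int")).alias("null_keys"),
    F.countDistinct("trade_id").alias("distinct_keys"),
).first()

assert checks["row_count"] > 0, "dataset is empty"
assert checks["null_keys"] == 0, "null primary keys found"
assert checks["distinct_keys"] == checks["row_count"], "duplicate keys found"
```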

Posted 1 week ago

Apply

7.0 - 12.0 years

10 - 15 Lacs

Bengaluru

Work from Office

We are seeking a highly skilled Senior Lead Data Engineer to join our R&D Data Engineering Team. In this role, you will be a key player in shaping the architecture and technical direction of our data platform, ensuring that it meets the evolving needs of the business while adhering to best practices and industry standards. If you're looking for an opportunity to combine your technical skills with strategic thinking and make a real difference, we want to hear from you.

About You - experience, education, skills, and accomplishments:
- Bachelor's Degree or equivalent.
- At least 7 years of relevant experience.
- At least 5 years in software development: demonstrated experience in software development, with a focus on Big Data technologies.
- At least 3 years in distributed data processing: proven experience in building scalable distributed data processing solutions.
- At least 3 years in database design: expertise in database design and development, with a strong focus on data model design.
- Strong proficiency with Apache Spark and Airflow: extensive hands-on experience with these technologies, leveraging them for data processing and orchestration.
- Python proficiency: advanced proficiency in Python for data processing and building services.
- Experience with Databricks and Snowflake: practical experience with these platforms, including their use in cloud-based data pipelines.
- Familiarity with Delta Lake or Apache Iceberg: experience working with these storage formats to decouple storage from processing engines (see the sketch after this listing).
- Cloud-based solutions expertise: proven experience in designing and implementing cloud-based data pipelines, with specific expertise in AWS services such as S3, RDS, EMR, and AWS Glue.
- CI/CD best practices: strong understanding and application of CI/CD principles.

It would be great if you also had:
- Familiarity with Cassandra, Hadoop, Apache Hive, Jupyter notebooks, and BI tools such as Tableau and Power BI.
- Experience with PL/SQL and Oracle GoldenGate.
- Knowledge of any of these technologies/tools: Cassandra, Hadoop, Apache Hive, Snowflake, Jupyter Notebook, the Databricks stack, and AWS services such as EC2, ECS, RDS, EMR, S3, AWS Glue, and Airflow.

What will you be doing in this role?
- Provide technical leadership: offer strategic guidance on technology choices, comparing different solutions to meet business requirements while considering cost control and performance optimization.
- Communicate effectively: exhibit excellent communication skills, with the ability to clearly articulate complex technical concepts to both technical and non-technical stakeholders.
- Design and maintain data solutions: develop and maintain the overall solution architecture for the Data Platform, demonstrating deep expertise in integration architecture and design across multiple platforms at an enterprise scale.
- Enforce best practices: implement and enforce best practices in Big Data management, from software selection to architecture design and implementation processes.
- Drive continuous improvement: contribute to the continuous enhancement of support and delivery functions by staying informed about technology trends and making recommendations for improving application services.
- Lead technical investigations: conduct technical investigations and proofs of concept, both individually and as part of a team, including hands-on coding to provide technical recommendations.
- Knowledge sharing: actively spread knowledge and best practices within the team, fostering a culture of continuous learning and improvement.

About the Team: We are a team located in India, the US, and Europe.

Hours of Work: Regular working hours in India.
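
Delta Lake's decoupling of storage from processing engines, mentioned above, surfaces in PySpark as just another DataFrame format; a sketch assuming the delta-spark package is installed, with illustrative paths:

```python
from pyspark.sql import SparkSession

# Standard delta-spark session configuration.
spark = (
    SparkSession.builder.appName("delta-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config(
        "spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaCatalog",
    )
    .getOrCreate()
)

df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])

# Write as a Delta table; any Delta-aware engine can read it back.
df.write.format("delta").mode("overwrite").save("s3a://example-bucket/tables/users")

# Time travel: read an earlier version of the same table.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(
    "s3a://example-bucket/tables/users"
)
v0.show()
```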

Posted 1 week ago

Apply

3.0 - 4.0 years

11 - 15 Lacs

Mumbai

Work from Office

Optum's Applied AI team is seeking a detail-oriented and proactive Senior Data Scientist (core in data analysis) with a minimum of 3-4 years of industry experience to support the development and maintenance of data pipelines that fuel AI/ML initiatives. You will work closely with data engineers and other data scientists to enable large-scale data analysis of prior ML-inferred data, both structured and unstructured clinical datasets. This role blends hands-on data wrangling, transformation logic, and insight generation in a highly collaborative environment.

Primary Responsibilities:
- Collaborate with cross-functional teams, including ML engineers, annotators, and clinical domain experts, to translate business challenges into deployable AI solutions
- Implement automated data labeling pipelines using techniques like active learning, weak supervision, and human-in-the-loop systems
- Support the design, development, and maintenance of scalable data pipelines for AI/ML workflows
- Perform exploratory data analysis (EDA), profiling, and validation on healthcare data to ensure readiness for downstream ML tasks
- Partner with data scientists to prepare datasets for model training, evaluation, and monitoring
- Ensure data quality, consistency, and documentation across structured (e.g., EHRs) and unstructured (e.g., scanned PDFs) sources
- Integrate and monitor data workflows using orchestration tools (e.g., Airflow, Step Functions)
- Build dashboards or reports to communicate insights, trends, or pipeline health as needed
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- Bachelor's degree in computer science or an adjacent field
- Advanced degree in a field that emphasizes the use of data science/statistics techniques (e.g., Computer Science, Applied Mathematics, or a field with direct NLP application)
- 4+ years of experience in data science (core in data analysis) supporting the development and maintenance of data pipelines that fuel AI/ML initiatives
- Solid experience in MS Excel and version control using Git
- Proficiency in Python (advanced) and SQL (advanced); experience with tools like Airflow and Jupyter Notebook
- Cloud exposure: basic familiarity with the AWS ecosystem
- Visualization tools: Power BI, Tableau, or Plotly for dashboarding and reporting
- Data quality monitoring: experience with tools or techniques for detecting data drift or label inconsistencies (see the sketch after this listing)
- Healthcare/NLP domain knowledge: prior work with clinical documents, EMR data, or coding workflows
- Proven excellent communication skills
- Proven flexibility to provide support during critical business periods
- Proven ability to interpret and present complex data in various formats
- Proven positive team player with a drive to learn and contribute to achieving results
- Willingness to work in varying shifts
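
For the data-drift qualification above, one common lightweight check is a two-sample Kolmogorov-Smirnov test; a sketch on synthetic stand-in data:

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic stand-ins for a feature's training vs. production distributions.
rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, size=5000)
prod_scores = rng.normal(0.3, 1.1, size=5000)  # shifted: simulated drift

stat, p_value = ks_2samp(train_scores, prod_scores)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3g}")
if p_value < 0.01:
    print("Distributions differ: investigate upstream data or retrain.")
```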

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

pune, maharashtra

On-site

As a Senior Threat Intelligence Analyst at Fortinet, you will be an integral part of the Cyber Threat Intelligence (CTI) Collections/Analysis team. This team comprises highly skilled analysts, researchers, and specialists dedicated to safeguarding customers and their assets from external threats. Leveraging our proprietary hybrid intelligence platforms and methodologies, you will use your exceptional writing and editing skills to generate actionable intelligence for our customer base. Your responsibilities will involve assessing current and emerging threats related to cybercrime and various forms of malicious exploitation.
Your role will encompass the following key responsibilities:
- Serve as the CTI Product Subject Matter Expert (SME) to address client requests, respond to incidents, and manage escalations effectively.
- Collaborate with customers to understand their unique threat landscape and provide customized solutions accordingly.
- Monitor and analyze cybersecurity events, incidents, and vulnerability reports from multiple sources to stay abreast of potential risks.
- Review and interpret data sourced from various outlets such as OSINT, Darknet, and TECHINT.
- Work closely with the Internal Research team to identify threats specific to individual customers.
- Develop customer-specific analytical reports based on identified findings.
- Produce regular Security Trend reports utilizing information from the internal threat repository.
- Monitor, analyze, and report on cybersecurity events, intrusion events, security incidents, and other potential threats while adhering to operational security best practices.
We are seeking candidates with the following qualifications and attributes:
- Experience in Managed Threat Intelligence services is essential.
- Prior experience as an SME supporting clients' CTI requirements is highly desirable.
- An active presence on platforms like Medium for blog writing is a plus.
- Strong foundational knowledge in Information Security.
- Proficiency in Cyber Threat Intelligence concepts.
- Ability to create high-quality Security Analysis reports.
- Proficiency in understanding and analyzing various threat vectors effectively.
- Familiarity with cyber threats, malware, APTs, exploits, etc.
- Knowledge of DarkNet, DeepWeb, open-source intelligence, social media, and other sources of cyber-criminal activity.
- Strong interpersonal and English communication skills to effectively engage with clients and explain technical details.
- Willingness to learn new technologies and skills, adapt to changes, and innovate.
- Previous experience in Cyber Crime Research is advantageous.
- Certifications such as CEH and other cybersecurity qualifications are beneficial but not mandatory.
- Proficiency in programming/scripting languages, particularly Python (often in Jupyter Notebook), is an added advantage; see the sketch after this listing.
- Ability to maintain the highest levels of discretion and confidentiality.
Language Proficiency: Fluency in English is mandatory; proficiency in Hindi or another international language such as Arabic, Russian, Japanese, Chinese, German, or Italian is an additional asset.
Desired Experience: 4-6 years
Working Conditions: This position requires full-time office work; remote work options are not available.
Company Culture: At Fortinet, we promote a culture of innovation, collaboration, and continuous learning. We are dedicated to fostering an inclusive environment where every employee is valued and respected. We encourage individuals from diverse backgrounds and identities to apply. Our Total Rewards package is competitive, supporting your overall well-being, in a supportive environment. If you are looking for a challenging, fulfilling, and rewarding career journey, we invite you to explore the opportunity of joining us to provide impactful solutions to our 660,000+ customers worldwide.
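Since the posting flags Python scripting as an advantage, a minimal sketch of the kind of triage an analyst might run in a notebook; the event structure and field names are assumptions for illustration only:

```python
from collections import Counter

# Hypothetical normalized threat events, e.g. parsed from an OSINT feed.
events = [
    {"source": "osint", "type": "phishing", "target": "acme-corp"},
    {"source": "darknet", "type": "credential-leak", "target": "acme-corp"},
    {"source": "osint", "type": "phishing", "target": "globex"},
]

# Count event types for a single customer to seed a trend report.
customer = "acme-corp"
by_type = Counter(e["type"] for e in events if e["target"] == customer)
for threat_type, count in by_type.most_common():
    print(f"{customer}: {threat_type} x{count}")
```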

Posted 1 week ago

Apply

0.0 - 2.0 years

4 - 8 Lacs

mumbai

Work from Office

SFA Analyst (Mumbai)
About Us: Morningstar DBRS is a leading provider of independent rating services and opinions for corporate and sovereign entities, financial institutions, and structured finance instruments globally, currently with 700 employees in eight offices worldwide. Formed through the July 2019 acquisition of DBRS by Morningstar, Inc., the ratings business is the fourth-largest provider of credit ratings in the world. Morningstar DBRS is committed to empowering investor success, serving the market through leading-edge technology and raising the bar for the industry. Morningstar DBRS is a market leader in Canada, the U.S., and Europe in multiple asset classes. Morningstar DBRS rates more than 4,000 issuers and 60,000 securities worldwide and is driven to bring more clarity, diversity of opinion, and responsiveness to the ratings process. Its approach and size provide the agility to respond to customers' needs while being large enough to provide the necessary expertise and resources. Visit https://dbrs.morningstar.com/ to learn more.
About the Role: The Morningstar DBRS Structured Finance Analytics team is looking for a candidate with a strong problem-solving, analytical, and technical mindset. As an Analyst, you will work with the team to automate data analysis processes, including document downloads and data storage; build and run data analytics to aid the rating, research, and surveillance process; develop and enhance data analysis and workflow optimization tools; and assist with special projects and initiatives as needed. Proficiency in Python, Tableau, SQL, and VBA will be needed. This role provides unique opportunities for mastering key aspects of our business, including in-depth collateral and deal analysis. This position is based in our Navi Mumbai office.
Responsibilities:
Work with the Associate Quant Analyst to deliver projects and services.
Assist the team with transforming, improving, and integrating data, depending on business requirements.
Combine data result sets across multiple sources (see the sketch after this listing).
Understand core concepts around data storage and access, specifically in structured data systems such as databases (SQL, Athena, AWS S3).
Participate actively in the design and build phases, aiming to produce high-quality deliverables.
Bring a mindset for process efficiency and ideate automations.
Collect, organize, and study data from internal and external sources for use in criteria development, ratings, and research reports.
Take ownership of tasks with a focus on the quality and accuracy of deliverables.
Demonstrate a strong learning curve.
Be highly organized and efficient, with the ability to multi-task and meet tight deadlines.
Ensure compliance with regulatory and company policies and procedures.
Requirements:
Bachelor's degree in Engineering or another quantitative discipline, Finance, or Management Studies. A Master's degree, CFA, or CFA program enrollment is a plus.
1-2 years of experience working with financial products using Python.
Proficiency in Python/Anaconda, the data science stack (Jupyter, Pandas, NumPy), Tableau, Microsoft Excel, Visual Basic for Applications (VBA), and MSSQL. Proficiency in object-oriented programming is a plus.
Strong attention to detail and accuracy.
Highly motivated self-starter who is keen to learn, has a positive attitude, and a strong work ethic.
Ability to manage multiple tasks at the same time and deliver results in a timely manner.
Good interpersonal skills and the ability to participate and contribute as a team player.
Morningstar DBRS is an equal opportunity employer.
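A minimal sketch of combining result sets across sources, as the responsibilities describe, using pandas; the file, table, and column names are assumptions for illustration (sqlite3 stands in for MSSQL/Athena here):

```python
import sqlite3

import pandas as pd

# Source 1: a servicer report delivered as a flat file (assumed columns).
loans = pd.read_csv("servicer_report.csv")  # loan_id, balance, status

# Source 2: deal metadata from a relational store.
with sqlite3.connect("deals.db") as conn:
    deals = pd.read_sql("SELECT loan_id, deal_name FROM deal_map", conn)

# Combine the result sets and summarize balances per deal.
merged = loans.merge(deals, on="loan_id", how="left")
summary = merged.groupby("deal_name")["balance"].sum().reset_index()
print(summary)
```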

Posted 1 week ago

Apply

2.0 - 5.0 years

9 - 13 Lacs

mumbai

Work from Office

The Group: Morningstar's Quantitative Research Group creates independent investment research and data-driven analytics designed to help investors and Morningstar achieve better outcomes by making better decisions. We utilize statistical rigor and large data sets to inform the methodologies we develop. Our research encompasses hundreds of thousands of securities within a large breadth of asset classes, including equities, fixed income, structured credit, and funds. Morningstar is one of the largest independent sources of fund, equity, and credit data and research in the world, and our advocacy for investors' interests is the foundation of our company.
The Role: In this dynamic, external-facing role, you will present Morningstar's research, data, and products to external journalists and reporters, with a primary focus on leading financial media outlets. You will collaborate closely with the European Corporate Communications team to respond to media inquiries with timely, data-driven insights. Additionally, you will be responsible for producing original investment research that informs and empowers investors. This role is instrumental in enhancing Morningstar's public profile and building lasting relationships with top-tier financial media.
Location: Mumbai (Platinum Techno Park)
Working Hours: UK Shift
Responsibilities:
Collaborate closely with Morningstar's Corporate Communications, Fund Research, and Equity Research teams to respond to media requests with timely, data-driven analysis.
Accurately represent the work of the quantitative research team to both internal stakeholders and external clients.
Develop deep expertise in key investment topics, including equities, fixed income, mutual funds, and ETFs, as well as Morningstar's proprietary methodologies and data sets.
Build and maintain Jupyter notebooks and Excel-based calculation templates to streamline and automate repetitive analytical tasks (see the sketch after this listing).
Maintain comprehensive documentation of the procedures used to resolve media requests.
Requirements:
Bachelor's degree in a financial discipline
Progress towards CFA Level I preferred
3–5 years of experience in data journalism, investment research, or quantitative roles, ideally with a focus on media or client-facing communication
Excellent written and verbal communication skills
Advanced proficiency in Microsoft Excel, including macros and VBA
Experience coding in SQL and Python
Strong organizational skills with the ability to manage multiple projects simultaneously under tight deadlines
High attention to detail and accuracy in data analysis and reporting in Excel
Ability to work both independently and collaboratively with minimal supervision
Good to have: Familiarity with Morningstar products, experience with European or Asian markets, and an understanding of ESG investing
Morningstar is an equal opportunity employer
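One way such a repetitive task might be automated, as a hedged sketch: compute trailing fund returns in a notebook and export them to an Excel template. The input file, column layout, and horizon choices are assumptions, not Morningstar's actual workflow:

```python
import pandas as pd

# Assumed input: one column of daily NAVs per fund, indexed by date.
navs = pd.read_csv("fund_navs.csv", index_col="date", parse_dates=True)

# Trailing cumulative returns over common media-request horizons.
horizons = {"1M": 21, "3M": 63, "1Y": 252}  # trading days, approximate
returns = pd.DataFrame({
    label: navs.iloc[-1] / navs.iloc[-days] - 1
    for label, days in horizons.items()
})

# Write to an Excel sheet a reporter-facing template can reference.
returns.to_excel("trailing_returns.xlsx", sheet_name="returns")
```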

Posted 1 week ago

Apply

2.0 - 5.0 years

7 - 12 Lacs

mumbai

Work from Office

Structured Finance, Associate Quant Analyst (Mumbai)
The Team: DBRS Morningstar is a global credit ratings business with about 800 employees in eight offices globally. Formed through the acquisition of DBRS by Morningstar, Inc., the credit ratings business is the fourth-largest provider of credit ratings in the world. DBRS Morningstar is committed to empowering investor success, serving the market through leading-edge technology, and raising the bar for the industry. DBRS Morningstar is a market leader in Canada, the U.S., and Europe in multiple asset classes. DBRS Morningstar rates more than 4,000 issuers and 56,000 securities worldwide and is driven to bring more clarity, diversity, and responsiveness to the ratings process. Visit https://www.dbrsmorningstar.com/learn/dbrsmorningstar to learn more.
The Credit Operations Mumbai Analytics team enables and supports the efficient and effective delivery of credit ratings and information to the market with its specialized skills and assets, consistent frameworks, and economies of scale. We collaborate with stakeholders to build creative, impactful solutions and offer services for the business and the market.
About the Role: The DBRS Morningstar Structured Finance team is looking for a candidate with a strong problem-solving, analytical, and technical mindset. As an Associate Quant Analyst, you will work with the team to automate data analysis processes, including document downloads and data storage; build and run data analytics to aid the rating, research, and surveillance process; develop and enhance data analysis and workflow optimization tools; and assist with special projects and initiatives as needed. Proficiency in Python, SQL, and VBA will be needed. This role provides unique opportunities for mastering key aspects of our business, including in-depth collateral and deal analysis. This position is based in our Navi Mumbai office.
Responsibilities:
Work directly with internal and external teams to deliver projects and services.
Perform quantitative analysis, where possible, to measure outcomes.
Assist the team with transforming, improving, and integrating data, depending on business requirements.
Combine data result sets across multiple sources.
Understand core concepts around data storage and access, specifically in structured data systems such as databases (SQL, Athena, AWS S3).
Develop and maintain APIs to integrate internal and external data sources (see the sketch after this listing).
Participate actively in the design and build phases, aiming to produce high-quality deliverables.
Bring a mindset for process efficiency and ideate automations.
Collect, organize, and study data from internal and external sources for use in criteria development, ratings, and research reports.
Take ownership of tasks with a focus on the quality and accuracy of deliverables.
Demonstrate a strong learning curve.
Be highly organized and efficient, with the ability to multi-task and meet tight deadlines.
Ensure compliance with regulatory and company policies and procedures.
Requirements:
Bachelor's degree in Engineering or another quantitative discipline, Economics, Finance, or Management Studies. A Master's degree, CFA, or CFA program enrollment is a plus.
2-3 years of experience working with financial products using Python.
Proficiency in Python/Anaconda, the data science stack (Jupyter, Pandas, NumPy), Microsoft Excel, Visual Basic for Applications (VBA), and MSSQL. Proficiency in object-oriented programming is a plus.
Strong attention to detail and accuracy.
Highly motivated self-starter who is keen to learn, has a positive attitude, and a strong work ethic.
Ability to manage multiple tasks at the same time and deliver results in a timely manner.
Good interpersonal skills and the ability to participate and contribute as a team player.
Morningstar DBRS is an equal opportunity employer.
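A hedged sketch of the API-integration responsibility: pull records from an external data endpoint and flatten them for analysis. The URL, query parameters, and payload shape are hypothetical assumptions:

```python
import pandas as pd
import requests

# Hypothetical external source of deal performance records.
URL = "https://api.example.com/v1/deal-performance"

resp = requests.get(URL, params={"asset_class": "RMBS"}, timeout=30)
resp.raise_for_status()

# Flatten the nested JSON payload into a tabular result set.
records = resp.json()["results"]  # assumed payload shape
df = pd.json_normalize(records)
print(df.head())
```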

Posted 1 week ago

Apply

4.0 - 7.0 years

7 - 14 Lacs

gurugram

Work from Office

Responsibilities: * Design, develop, test, and maintain Python applications using Pandas, NumPy, Databricks, and Matplotlib (see the sketch below). * Immediate joiners only. * Databricks experience is a must.
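For context, a minimal sketch of the Pandas/Matplotlib workflow the role names; the data is fabricated for illustration, and in practice it would come from Databricks tables:

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Illustrative monthly series; a real job would read from Databricks.
df = pd.DataFrame({
    "month": pd.date_range("2024-01-01", periods=12, freq="MS"),
    "revenue": np.random.default_rng(0).uniform(90, 120, 12),
})

# Plot the trend with a 3-month rolling-mean overlay.
df["rolling_3m"] = df["revenue"].rolling(3).mean()
df.plot(x="month", y=["revenue", "rolling_3m"], title="Revenue trend")
plt.tight_layout()
plt.savefig("revenue_trend.png")
```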

Posted 2 weeks ago

Apply

1.0 - 2.0 years

6 - 9 Lacs

gurugram, delhi / ncr

Work from Office

Important Note: Read till the end and send your application to our WhatsApp number to be considered.
Get ready to be part of a fast-growing team building a next-generation marketing platform that makes it truly simple to launch marketing campaigns with AI agents. We were recognized among YourStory's Tech50 companies of 2021 and received the Startup Maharathi award at Startup Mahakumbh 2025.
Tired of being a small cog in a big machine? At Intellemo, you won't just be writing code; you'll be a core member of our engineering team, building the brain behind our AI marketing agents that are changing the game for thousands of businesses. We are a funded, profitable, fast-growing startup on a mission to make sophisticated marketing and sales automation accessible to everyone.
The Opportunity: This isn't just another backend role. As our core backend hire, you will have unparalleled ownership, a direct impact on our product's success, and the opportunity to work directly alongside the CEO/CTO to shape our entire technical foundation. You will be instrumental in solving our biggest bottleneck and paving the way for us to scale 10x. If you thrive on challenges and want your work to matter, this is the role for you.
Key Responsibilities:
Develop and maintain our backend services, including building and consuming RESTful APIs and working with our GraphQL API gateway.
Integrate with external platforms such as Google Ads, Meta Ads, LinkedIn Ads, and Pinterest to automate campaign management and reporting.
Connect with AI/ML APIs and contribute to the development of our agentic AI capabilities.
Orchestrate agentic behavior using our internal agentic framework to create intelligent, automated marketing workflows.
Build and optimize our creative rendering engine for generating images and videos using libraries like Wand, ImageMagick, and FFmpeg.
Develop web scraping and data extraction capabilities for website/landing-page analysis using tools like BeautifulSoup (see the sketch after this listing).
Contribute to our microservices-oriented architecture, ensuring our services are scalable, maintainable, and resilient.
Collaborate with frontend developers to ensure seamless integration of our backend services with the user interface.
Write clean, efficient, and well-documented code, and participate in code reviews to maintain our high standards of quality.
Required Skills and Qualifications:
1-2 years of professional experience as a Python backend engineer with a Gen AI focus.
Strong proficiency in Python and FastAPI.
Demonstrable experience building and integrating with third-party APIs, particularly marketing platforms like Google Ads and Meta Ads.
A solid understanding of AI/ML concepts and experience working with AI/ML APIs.
Familiarity with LangChain, MCP, or similar frameworks for building applications with large language models (LLMs).
Experience with image or video processing libraries such as OpenCV (cv2), ImageMagick, or FFmpeg.
Proficiency with web scraping libraries like BeautifulSoup.
A good understanding of microservices architecture and its principles.
Familiarity with GraphQL and experience working with API gateways.
Solid knowledge of relational and NoSQL databases (e.g., PostgreSQL, MongoDB).
A Bachelor's degree in Computer Science is a must.
What We Offer:
Direct Mentorship: You will be mentored directly by the CEO/CTO, offering a unique learning opportunity you won't find anywhere else.
Unmatched Impact & Ownership: See the code you write immediately affect our clients and our company's bottom line. No bureaucracy, just pure building.
A Problem-Solver's Paradise: We offer a constant stream of complex and fascinating challenges at the intersection of AI, marketing, and creative automation.
Rapid Growth Trajectory: As a critical early hire, you'll be on the fast track for technical leadership as the company scales.
Competitive Salary: A salary in the range of 7-10 LPA plus the option of ESOPs post-probation.
Location: This is a full-time, in-office position at our Gurgaon, Haryana office. We do not offer work-from-home or remote roles; we believe in the power of in-person collaboration to iterate and build faster, and we are excited to build a strong, cohesive team.
How to Apply: Click https://wa.me/917574863996?text=Hi+team,+saw+this+on+Naukri.+I+want+to+work+with+Intellemo or send a WhatsApp message to +917574863996. Shortlisted candidates will be contacted to schedule the next steps in the interview process.
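A hedged sketch of the landing-page analysis responsibility using requests and BeautifulSoup; the target URL is a placeholder assumption, and the extracted signals are just one plausible choice:

```python
import requests
from bs4 import BeautifulSoup

# Placeholder target; a real run would take the advertiser's landing page.
URL = "https://example.com"

html = requests.get(URL, timeout=15).text
soup = BeautifulSoup(html, "html.parser")

# Pull signals a campaign generator might condition on.
title = soup.title.string if soup.title else ""
meta = soup.find("meta", attrs={"name": "description"})
description = meta["content"] if meta else ""
headings = [h.get_text(strip=True) for h in soup.find_all(["h1", "h2"])]

print({"title": title, "description": description, "headings": headings})
```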

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

You are a proactive and experienced senior business intelligence analyst with over 5 years of expertise in providing data-driven strategic business solutions. Your strong background in data analysis, team leadership, and project management, along with a proven track record of solving descriptive, inquisitive, and prescriptive analytics problems, makes you the ideal candidate for this role.
Your responsibilities will include leading and managing a team of data analysts to design, analyze, measure, and deploy multiple campaigns. You will develop and enhance reports to summarize key business metrics and provide strategic insights. Automating processes to reduce manual intervention and improve efficiency will be a key part of your role, as will conducting SQL training sessions and mentoring team members to enhance their technical and analytical skills. You will create frameworks and statistical models to support business decisions and campaign rollouts, and develop and maintain dashboards and data visualizations to track and analyze key performance indicators (KPIs). Collaborating with stakeholders to understand business needs and deliver actionable insights, managing vendor relationships, and developing tools to streamline vendor management processes round out the role.
Your qualifications include 5 to 8 years of relevant experience; proficiency in SQL, Python, Tableau, Power BI, Looker, Jupyter Notebook, GCP (Google Cloud Platform), Microsoft Excel, and PowerPoint; a strong understanding of statistical concepts and techniques, including hypothesis testing, A/B testing, regression analysis, and clustering (a sketch follows this listing); and experience with data visualization and reporting tools. Excellent project management and business communication skills are essential, along with the ability to lead and mentor a team, fostering a collaborative and growth-oriented environment.
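For the statistical side, a minimal sketch of the A/B testing technique the qualifications mention, using SciPy; the conversion data is fabricated purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Fabricated per-user conversion outcomes for control and treatment.
control = rng.binomial(1, 0.10, size=5000)
treatment = rng.binomial(1, 0.12, size=5000)

# Two-sample t-test on conversion rates (a z-test on proportions
# would be an equally standard choice at this sample size).
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"lift={treatment.mean() - control.mean():.4f}, p={p_value:.4f}")
```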

Posted 2 weeks ago

Apply