
406 Plotly Jobs - Page 7

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

0.0 - 2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Marketing Measurement & Optimization Analyst

Qualifications:
- Bachelor's degree in Statistics, Mathematics, Computer Science, Engineering, or a related field.
- 0-2 years of proven experience in a similar role.
- Strong problem-solving skills.
- Excellent communication skills.

Skills:
- Proficiency in R (tidyverse, plotly/ggplot2) or Python (pandas, numpy) for data manipulation and visualization, and SQL (joins, aggregation, analytic functions) for data handling.
- Ability to understand marketing data and perform statistical tests.
- Knowledge of data visualization tools such as Tableau or Power BI.

Responsibilities:
- Familiarity with Media Mix Modelling and Multi-Touch Attribution.
- Knowledge of panel data and its analysis.
- Understanding of the data science workflow.
- Familiarity with marketing channels, performance & effectiveness metrics, and the conversion funnel.
- Work with large data sets, performing data QA and manipulation tasks such as joins/merges, aggregation and segregation, and appends.

Location: Pune | Brand: Merkle | Time Type: Full time | Contract Type: Permanent
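
For a sense of the pandas skill level this listing names (joins, aggregation, appends), here is a minimal sketch; the campaign data and column names are hypothetical, not part of the posting.

import pandas as pd

# Hypothetical campaign spend and conversion data
spend = pd.DataFrame({
    "channel": ["search", "social", "tv"],
    "spend": [1200.0, 800.0, 5000.0],
})
conversions = pd.DataFrame({
    "channel": ["search", "social", "tv"],
    "conversions": [300, 150, 450],
})

# SQL-style inner join on channel, then a derived aggregate
df = spend.merge(conversions, on="channel", how="inner")
df["cpa"] = df["spend"] / df["conversions"]  # cost per conversion
print(df.sort_values("cpa"))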

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About us

Bain & Company is a global consultancy that helps the world’s most ambitious change makers define the future. Across 65 offices in 40 countries, we work alongside our clients as one team with a shared ambition to achieve extraordinary results, outperform the competition and redefine industries. Since our founding in 1973, we have measured our success by the success of our clients, and we proudly maintain the highest level of client advocacy in the industry.

In 2004, the firm established its presence in the Indian market by opening the Bain Capability Center (BCC) in New Delhi. The BCC is now known as the BCN (Bain Capability Network), with nodes across various geographies. BCN is an integral part and the largest unit of Expert Client Delivery (ECD). ECD plays a critical role in adding value to Bain's case teams globally by supporting them with analytics and research solutioning across all industries, specific domains for corporate cases, client development, private equity diligence and Bain intellectual property. The BCN comprises Consulting Services, Knowledge Services and Shared Services.

Who you will work with

The Consumer Products Center of Expertise collaborates with Bain’s global Consumer Products Practice leadership, client-facing Bain leadership and teams, and with end clients on the development and delivery of Bain’s proprietary CP products and solutions. These solutions aim to answer strategic questions of Bain’s CP clients relating to brand strategy (consumer needs, assortment, pricing, distribution), revenue growth management (pricing strategy, promotions, profit pools, trade terms), negotiation strategy with key retailers, optimization of COGS, etc. You will work as part of the CP CoE team, comprising a mix of a Director, Managers, Project Leads, Associates and Analysts working to implement cloud-based, end-to-end advanced analytics solutions. Delivery models vary from working as part of the CP Center of Expertise, a broader global Bain case team within the CP ringfence, or other industry CoEs (such as FS / Retail / TMT / Energy / CME) with BCN on a need basis.

The AS is expected to have a knack for seeking out challenging problems and coming up with their own ideas, which they will be encouraged to brainstorm with their peers and managers. They should be willing to learn new techniques and be open to solving problems with an interdisciplinary approach. They must have excellent coding skills and should demonstrate a willingness to write modular, reusable, and functional code.

What you’ll do
- Collaborate with data scientists working with Python, LLMs, NLP, and Generative AI to design, fine-tune, and deploy intelligent agents and chains-based applications.
- Develop and maintain front-end interfaces for AI and data science applications using React.js / Angular / Next.js and/or Streamlit / Dash, enhancing user interaction with complex machine learning and NLP-driven systems.
- Build and integrate Python-based machine learning models with backend systems via RESTful APIs using frameworks like FastAPI, Flask, or Django.
- Translate complex business problems into scalable technical solutions, integrating AI capabilities with robust backend and frontend systems.
- Assist in the design and implementation of scalable data pipelines and ETL workflows using DBT, PySpark, and SQL, supporting both analytics and generative AI solutions.
- Leverage containerization tools like Docker and use Git for version control, ensuring code modularity, maintainability, and collaborative development.
- Deploy ML-powered and data-driven applications on cloud platforms such as AWS or Azure, optimizing for performance, scalability, and cost-efficiency.
- Contribute to internal AI/ML Ops platforms and tools, streamlining model deployment, monitoring, and lifecycle management.
- Create dashboards, visualizations, and presentations using tools like Tableau / Power BI, Plotly, and Seaborn to drive business insights.
- Demonstrate proficiency with Excel and PowerPoint, and strong business communication in stakeholder interactions.

About you
- A Master’s degree or higher in Computer Science, Data Science, Engineering, or related fields; Bachelor's candidates with relevant industry experience will also be considered.
- Proven experience (2 years for Master’s; 3+ years for Bachelor’s) in AI/ML, software development, and data engineering.
- Solid understanding of LLMs, NLP, Generative AI, chains, agents, and model fine-tuning methodologies.
- Proficiency in Python, with experience using libraries such as Pandas, NumPy, Plotly, and Seaborn for data manipulation and visualization.
- Experience working with modern Python frameworks such as FastAPI for backend API development.
- Frontend development skills using HTML, CSS, JavaScript/TypeScript, and modern frameworks like React.js; Streamlit knowledge is a plus.
- Strong grasp of data engineering concepts, including ETL pipelines, batch processing using DBT and PySpark, and working with relational databases like PostgreSQL, Snowflake, etc.
- Good working knowledge of cloud infrastructure (AWS and/or Azure) and deployment best practices.
- Familiarity with MLOps/AIOps tools and workflows, including CI/CD pipelines, monitoring, and container orchestration (with Docker and Kubernetes).
- Good to have: experience in BI tools such as Tableau or Power BI.
- Good to have: prior exposure to consulting projects or the CP (Consumer Products) business domain.

What makes us a great place to work

We are proud to be consistently recognized as one of the world's best places to work, a champion of diversity and a model of social responsibility. We are currently ranked the #1 consulting firm on Glassdoor’s Best Places to Work list, and we have maintained a spot in the top four on Glassdoor's list for the last 12 years. We believe that diversity, inclusion and collaboration are key to building extraordinary teams. We hire people with exceptional talents, abilities and potential, then create an environment where you can become the best version of yourself and thrive both professionally and personally. We are publicly recognized by external parties such as Fortune, Vault, Mogul, Working Mother, Glassdoor and the Human Rights Campaign for being a great place to work for diversity and inclusion, women, LGBTQ and parents.
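
To illustrate one item in this stack (exposing a Python ML model over a RESTful API with FastAPI), a minimal sketch follows. The endpoint, request schema, and placeholder "model" are hypothetical, not Bain's actual architecture.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ScoreRequest(BaseModel):
    features: list[float]

@app.post("/score")
def score(req: ScoreRequest):
    # Placeholder "model": in practice a trained scikit-learn or
    # PyTorch model would be loaded and its predict method called.
    prediction = sum(req.features) / max(len(req.features), 1)
    return {"prediction": prediction}

# Run locally with: uvicorn main:app --reload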

Posted 3 weeks ago

Apply

0.0 - 12.0 years

0 Lacs

Delhi, Delhi

On-site

About us

Bain & Company is a global consultancy that helps the world’s most ambitious change makers define the future. Across 65 offices in 40 countries, we work alongside our clients as one team with a shared ambition to achieve extraordinary results, outperform the competition and redefine industries. Since our founding in 1973, we have measured our success by the success of our clients, and we proudly maintain the highest level of client advocacy in the industry.

In 2004, the firm established its presence in the Indian market by opening the Bain Capability Center (BCC) in New Delhi. The BCC is now known as the BCN (Bain Capability Network), with nodes across various geographies. BCN is an integral part and the largest unit of Expert Client Delivery (ECD). ECD plays a critical role in adding value to Bain's case teams globally by supporting them with analytics and research solutioning across all industries, specific domains for corporate cases, client development, private equity diligence and Bain intellectual property. The BCN comprises Consulting Services, Knowledge Services and Shared Services.

Who you will work with

The Consumer Products Center of Expertise collaborates with Bain’s global Consumer Products Practice leadership, client-facing Bain leadership and teams, and with end clients on the development and delivery of Bain’s proprietary CP products and solutions. These solutions aim to answer strategic questions of Bain’s CP clients relating to brand strategy (consumer needs, assortment, pricing, distribution), revenue growth management (pricing strategy, promotions, profit pools, trade terms), negotiation strategy with key retailers, optimization of COGS, etc. You will work as part of the CP CoE team, comprising a mix of a Director, Managers, Project Leads, Associates and Analysts working to implement cloud-based, end-to-end advanced analytics solutions. Delivery models vary from working as part of the CP Center of Expertise, a broader global Bain case team within the CP ringfence, or other industry CoEs (such as FS / Retail / TMT / Energy / CME) with BCN on a need basis.

The AS is expected to have a knack for seeking out challenging problems and coming up with their own ideas, which they will be encouraged to brainstorm with their peers and managers. They should be willing to learn new techniques and be open to solving problems with an interdisciplinary approach. They must have excellent coding skills and should demonstrate a willingness to write modular, reusable, and functional code.

What you’ll do
- Collaborate with data scientists working with Python, LLMs, NLP, and Generative AI to design, fine-tune, and deploy intelligent agents and chains-based applications.
- Develop and maintain front-end interfaces for AI and data science applications using React.js / Angular / Next.js and/or Streamlit / Dash, enhancing user interaction with complex machine learning and NLP-driven systems.
- Build and integrate Python-based machine learning models with backend systems via RESTful APIs using frameworks like FastAPI, Flask, or Django.
- Translate complex business problems into scalable technical solutions, integrating AI capabilities with robust backend and frontend systems.
- Assist in the design and implementation of scalable data pipelines and ETL workflows using DBT, PySpark, and SQL, supporting both analytics and generative AI solutions.
- Leverage containerization tools like Docker and use Git for version control, ensuring code modularity, maintainability, and collaborative development.
- Deploy ML-powered and data-driven applications on cloud platforms such as AWS or Azure, optimizing for performance, scalability, and cost-efficiency.
- Contribute to internal AI/ML Ops platforms and tools, streamlining model deployment, monitoring, and lifecycle management.
- Create dashboards, visualizations, and presentations using tools like Tableau / Power BI, Plotly, and Seaborn to drive business insights.
- Demonstrate proficiency with Excel and PowerPoint, and strong business communication in stakeholder interactions.

About you
- A Master’s degree or higher in Computer Science, Data Science, Engineering, or related fields; Bachelor's candidates with relevant industry experience will also be considered.
- Proven experience (2 years for Master’s; 3+ years for Bachelor’s) in AI/ML, software development, and data engineering.
- Solid understanding of LLMs, NLP, Generative AI, chains, agents, and model fine-tuning methodologies.
- Proficiency in Python, with experience using libraries such as Pandas, NumPy, Plotly, and Seaborn for data manipulation and visualization.
- Experience working with modern Python frameworks such as FastAPI for backend API development.
- Frontend development skills using HTML, CSS, JavaScript/TypeScript, and modern frameworks like React.js; Streamlit knowledge is a plus.
- Strong grasp of data engineering concepts, including ETL pipelines, batch processing using DBT and PySpark, and working with relational databases like PostgreSQL, Snowflake, etc.
- Good working knowledge of cloud infrastructure (AWS and/or Azure) and deployment best practices.
- Familiarity with MLOps/AIOps tools and workflows, including CI/CD pipelines, monitoring, and container orchestration (with Docker and Kubernetes).
- Good to have: experience in BI tools such as Tableau or Power BI.
- Good to have: prior exposure to consulting projects or the CP (Consumer Products) business domain.

What makes us a great place to work

We are proud to be consistently recognized as one of the world's best places to work, a champion of diversity and a model of social responsibility. We are currently ranked the #1 consulting firm on Glassdoor’s Best Places to Work list, and we have maintained a spot in the top four on Glassdoor's list for the last 12 years. We believe that diversity, inclusion and collaboration are key to building extraordinary teams. We hire people with exceptional talents, abilities and potential, then create an environment where you can become the best version of yourself and thrive both professionally and personally. We are publicly recognized by external parties such as Fortune, Vault, Mogul, Working Mother, Glassdoor and the Human Rights Campaign for being a great place to work for diversity and inclusion, women, LGBTQ and parents.
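
Since this listing repeats the same stack as the previous one, here is a sketch of a different item from it: a small PySpark batch aggregation of the kind an ETL pipeline might run. The records and column names are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Hypothetical sales records
df = spark.createDataFrame(
    [("IN", "beverages", 120.0), ("IN", "snacks", 80.0), ("US", "beverages", 200.0)],
    ["country", "category", "revenue"],
)

# Batch aggregation: revenue by country and category
summary = df.groupBy("country", "category").agg(F.sum("revenue").alias("total_revenue"))
summary.show()

spark.stop()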

Posted 3 weeks ago

Apply

0 years

0 Lacs

Andhra Pradesh

On-site

Required Skills:
- Proficiency in Python, especially for data extraction and automation tasks.
- Strong experience with web scraping frameworks such as Scrapy, BeautifulSoup, or Selenium.
- Hands-on experience building automated data pipelines using tools like Airflow, Luigi, or custom schedulers.
- Knowledge of web data collection techniques, including handling pagination, AJAX, JavaScript-rendered content, and rate limiting.
- Familiarity with RESTful APIs and techniques for API-based data ingestion.
- Experience with data storage solutions such as PostgreSQL, MongoDB, or cloud-based storage (e.g., AWS S3, Google Cloud Storage).
- Version control proficiency, especially with Git.
- Ability to write clean, modular, and well-documented code.
- Strong debugging and problem-solving skills in data acquisition workflows.

Nice-to-Haves:
- Experience with cloud platforms (AWS, GCP, or Azure) for deploying and managing data pipelines.
- Familiarity with containerization tools like Docker.
- Knowledge of data quality monitoring and validation techniques.
- Exposure to data transformation tools (e.g., dbt).
- Understanding of ethical and legal considerations in web scraping.
- Experience working with CI/CD pipelines for data workflows.
- Familiarity with data visualization tools (e.g., Tableau, Power BI, or Plotly) for quick insights.
- Background in data science or analytics to support downstream use cases.

About Virtusa

Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, and that seeks to provide you with exciting projects and opportunities to work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.

Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
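
To illustrate the pagination and rate-limiting techniques this listing names, here is a minimal requests + BeautifulSoup sketch; the URL and the page markup (div.item) are hypothetical.

import time
import requests
from bs4 import BeautifulSoup

BASE_URL = "https://example.com/listings"  # hypothetical site

def scrape_all_pages(max_pages=5, delay_seconds=1.0):
    rows = []
    for page in range(1, max_pages + 1):
        resp = requests.get(BASE_URL, params={"page": page}, timeout=10)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        items = soup.select("div.item")  # hypothetical per-listing markup
        if not items:
            break  # no more pages
        rows.extend(item.get_text(strip=True) for item in items)
        time.sleep(delay_seconds)  # polite rate limiting between requests
    return rows

if __name__ == "__main__":
    print(scrape_all_pages())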

Posted 4 weeks ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

Required Skills:
- Proficiency in Python, especially for data extraction and automation tasks.
- Strong experience with web scraping frameworks such as Scrapy, BeautifulSoup, or Selenium.
- Hands-on experience building automated data pipelines using tools like Airflow, Luigi, or custom schedulers.
- Knowledge of web data collection techniques, including handling pagination, AJAX, JavaScript-rendered content, and rate limiting.
- Familiarity with RESTful APIs and techniques for API-based data ingestion.
- Experience with data storage solutions such as PostgreSQL, MongoDB, or cloud-based storage (e.g., AWS S3, Google Cloud Storage).
- Version control proficiency, especially with Git.
- Ability to write clean, modular, and well-documented code.
- Strong debugging and problem-solving skills in data acquisition workflows.

Nice-to-Haves:
- Experience with cloud platforms (AWS, GCP, or Azure) for deploying and managing data pipelines.
- Familiarity with containerization tools like Docker.
- Knowledge of data quality monitoring and validation techniques.
- Exposure to data transformation tools (e.g., dbt).
- Understanding of ethical and legal considerations in web scraping.
- Experience working with CI/CD pipelines for data workflows.
- Familiarity with data visualization tools (e.g., Tableau, Power BI, or Plotly) for quick insights.
- Background in data science or analytics to support downstream use cases.
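
As this listing repeats the scraping stack above, here is a sketch of a different item from it: scheduling a scraping task with an Airflow DAG. The DAG id, schedule, and callable are hypothetical, and the schedule parameter is named schedule_interval on Airflow versions before 2.4.

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def scrape_listings():
    # Placeholder for the actual extraction logic
    print("scraping...")

with DAG(
    dag_id="scrape_pipeline",        # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # run once a day
    catchup=False,
) as dag:
    scrape = PythonOperator(task_id="scrape_listings", python_callable=scrape_listings)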

Posted 4 weeks ago

Apply

3.0 years

0 Lacs

India

On-site

Description

Stat II / Senior / Principal Stat Programmer (R / R Shiny)

Syneos Health® is a leading fully integrated biopharmaceutical solutions organization built to accelerate customer success. We translate unique clinical, medical affairs and commercial insights into outcomes to address modern market realities. Our Clinical Development model brings the customer and the patient to the center of everything that we do. We are continuously looking for ways to simplify and streamline our work to not only make Syneos Health easier to work with, but to make us easier to work for. Whether you join us in a Functional Service Provider partnership or a Full-Service environment, you’ll collaborate with passionate problem solvers, innovating as a team to help our customers achieve their goals. We are agile and driven to accelerate the delivery of therapies, because we are passionate to change lives. Discover what our 29,000 employees, across 110 countries already know: WORK HERE MATTERS EVERYWHERE

Why Syneos Health

We are passionate about developing our people, through career development and progression; supportive and engaged line management; technical and therapeutic area training; peer recognition and a total rewards program. We are committed to our Total Self culture, where you can authentically be yourself. Our Total Self culture is what unites us globally, and we are dedicated to taking care of our people. We are continuously building the company we all want to work for and our customers want to work with. Why? Because when we bring together diversity of thoughts, backgrounds, cultures, and perspectives, we’re able to create a place where everyone feels like they belong.

Job Responsibilities
- Proficiency in R programming and advanced Shiny development, including reactivity, modules, layouts, and custom input/output components.
- Web development technologies: knowledge of HTML, CSS, and JavaScript for customizing the appearance and behaviour of Shiny applications.
- Familiarity with Python and the Selenium testing framework.
- Data visualization: ability to create interactive and insightful visualizations within Shiny apps using libraries like Plotly.
- Database integration: experience integrating Shiny applications with databases (e.g., SQL databases) for data storage, retrieval, and manipulation.
- Version control: proficiency in Git for collaborative development and code management.
- Performance optimization: skills to optimize Shiny applications for performance, including minimizing load times, improving responsiveness, and handling large datasets efficiently.
- Testing and debugging: ability to write unit tests, perform debugging, and troubleshoot issues within Shiny applications.
- Strong communication skills to understand requirements, document application features, and effectively communicate technical concepts.
- Agile experience.

Qualifications
- 3+ years of experience in R programming and advanced Shiny development, including reactivity, modules, layouts, and custom input/output components.
- Web development technologies: knowledge of HTML, CSS, and JavaScript for customizing the appearance and behaviour of Shiny applications.
- Familiarity with Python and the Selenium testing framework.
- Data visualization: ability to create interactive and insightful visualizations within Shiny apps using libraries like Plotly.
- Database integration: experience integrating Shiny applications with databases (e.g., SQL databases) for data storage, retrieval, and manipulation.
- Version control: proficiency in Git for collaborative development and code management.
- Performance optimization: skills to optimize Shiny applications for performance, including minimizing load times, improving responsiveness, and handling large datasets efficiently.
- Testing and debugging: ability to write unit tests, perform debugging, and troubleshoot issues within Shiny applications.
- Strong communication skills to understand requirements, document application features, and effectively communicate technical concepts.
- Agile experience.

Get to know Syneos Health

Over the past 5 years, we have worked with 94% of all novel FDA-approved drugs, 95% of EMA-authorized products, and over 200 studies across 73,000 sites and 675,000+ trial patients. No matter what your role is, you’ll take the initiative and challenge the status quo with us in a highly competitive and ever-changing environment. Learn more about Syneos Health: http://www.syneoshealth.com

Additional Information

Tasks, duties, and responsibilities as listed in this job description are not exhaustive. The Company, at its sole discretion and with no prior notice, may assign other tasks, duties, and job responsibilities. Equivalent experience, skills, and/or education will also be considered, so qualifications of incumbents may differ from those listed in the job description. The Company, at its sole discretion, will determine what constitutes equivalent qualifications to those described above. Further, nothing contained herein should be construed to create an employment contract. Occasionally, required skills/experiences for jobs are expressed in brief terms. Any language contained herein is intended to fully comply with all obligations imposed by the legislation of each country in which it operates, including the implementation of the EU Equality Directive, in relation to the recruitment and employment of its employees. The Company is committed to compliance with the Americans with Disabilities Act, including the provision of reasonable accommodations, when appropriate, to assist employees or applicants to perform the essential functions of the job.
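
As a hedged sketch of the Selenium testing skill this role names, here is a minimal browser test against a locally running Shiny app; the URL and element id are hypothetical, and this is not Syneos Health's test suite.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

APP_URL = "http://localhost:3838/my-shiny-app"  # hypothetical app

driver = webdriver.Chrome()
try:
    driver.get(APP_URL)
    # Wait up to 10 s for a hypothetical plot output div rendered by Shiny
    plot = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "plot_output"))
    )
    assert plot.is_displayed()
    print("Shiny app rendered the plot output.")
finally:
    driver.quit()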

Posted 4 weeks ago

Apply

7.0 - 12.0 years

0 Lacs

Chennai

On-site

Job Summary:

We are looking for a skilled Python Developer with 7 to 12 years of experience to design, develop, and maintain high-quality back-end systems and applications. The ideal candidate will have expertise in Python and related frameworks, with a focus on building scalable, secure, and efficient software solutions. This role requires a strong problem-solving mindset, collaboration with cross-functional teams, and a commitment to delivering innovative solutions that meet business objectives.

Responsibilities

Application and Back-End Development:
- Design, implement, and maintain back-end systems and APIs using Python frameworks such as Django, Flask, or FastAPI, focusing on scalability, security, and efficiency.
- Build and integrate scalable RESTful APIs, ensuring seamless interaction between front-end systems and back-end services.
- Write modular, reusable, and testable code following Python’s PEP 8 coding standards and industry best practices.
- Develop and optimize robust database schemas for relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB), ensuring efficient data storage and retrieval.
- Leverage cloud platforms like AWS, Azure, or Google Cloud for deploying scalable back-end solutions.
- Implement caching mechanisms using tools like Redis or Memcached to optimize performance and reduce latency.

AI/ML Development:
- Build, train, and deploy machine learning (ML) models for real-world applications, such as predictive analytics, anomaly detection, natural language processing (NLP), recommendation systems, and computer vision.
- Work with popular machine learning and AI libraries/frameworks, including TensorFlow, PyTorch, Keras, and scikit-learn, to design custom models tailored to business needs.
- Process, clean, and analyze large datasets using Python tools such as Pandas, NumPy, and PySpark to enable efficient data preparation and feature engineering.
- Develop and maintain pipelines for data preprocessing, model training, validation, and deployment using tools like MLflow, Apache Airflow, or Kubeflow.
- Deploy AI/ML models into production environments and expose them as RESTful or GraphQL APIs for integration with other services.
- Optimize machine learning models to reduce computational costs and ensure smooth operation in production systems.
- Collaborate with data scientists and analysts to validate models, assess their performance, and ensure their alignment with business objectives.
- Implement model monitoring and lifecycle management to maintain accuracy over time, addressing data drift and retraining models as necessary.
- Experiment with cutting-edge AI techniques such as deep learning, reinforcement learning, and generative models to identify innovative solutions for complex challenges.
- Ensure ethical AI practices, including transparency, bias mitigation, and fairness in deployed models.

Performance Optimization and Debugging:
- Identify and resolve performance bottlenecks in applications and APIs to enhance efficiency.
- Use profiling tools to debug and optimize code for memory and speed improvements.
- Implement caching mechanisms to reduce latency and improve application responsiveness.

Testing, Deployment, and Maintenance:
- Write and maintain unit tests, integration tests, and end-to-end tests using Pytest, Unittest, or Nose.
- Collaborate on setting up CI/CD pipelines to automate testing, building, and deployment processes.
- Deploy and manage applications in production environments with a focus on security, monitoring, and reliability.
- Monitor and troubleshoot live systems, ensuring uptime and responsiveness.

Collaboration and Teamwork:
- Work closely with front-end developers, designers, and product managers to implement new features and resolve issues.
- Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives, to ensure smooth project delivery.
- Provide mentorship and technical guidance to junior developers, promoting best practices and continuous improvement.

Required Skills and Qualifications

Technical Expertise:
- Strong proficiency in Python and its core libraries, with hands-on experience in frameworks such as Django, Flask, or FastAPI.
- Solid understanding of RESTful API development, integration, and optimization.
- Experience working with relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB).
- Familiarity with containerization tools like Docker and orchestration platforms like Kubernetes.
- Expertise in using Git for version control and collaborating in distributed teams.
- Knowledge of CI/CD pipelines and tools like Jenkins, GitHub Actions, or CircleCI.
- Strong understanding of software development principles, including OOP, design patterns, and MVC architecture.

Preferred Skills:
- Experience with asynchronous programming using libraries like asyncio, Celery, or RabbitMQ.
- Knowledge of data visualization tools (e.g., Matplotlib, Seaborn, Plotly) for generating insights.
- Exposure to machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn) is a plus.
- Familiarity with big data frameworks like Apache Spark or Hadoop.
- Experience with serverless architecture using AWS Lambda, Azure Functions, or Google Cloud Run.

Soft Skills:
- Strong problem-solving abilities with a keen eye for detail and quality.
- Excellent communication skills to effectively collaborate with cross-functional teams.
- Adaptability to changing project requirements and emerging technologies.
- Self-motivated with a passion for continuous learning and innovation.

Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.

Job Category: Software Division
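
One concrete instance of the caching responsibility described above, sketched with the redis-py client; the key name, TTL, and the stand-in "expensive query" are hypothetical.

import json
import time
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def expensive_query(user_id: int) -> dict:
    time.sleep(2)  # stand-in for a slow database call
    return {"user_id": user_id, "score": 42}

def get_user_score(user_id: int) -> dict:
    cache_key = f"user_score:{user_id}"
    cached = r.get(cache_key)
    if cached is not None:
        return json.loads(cached)  # cache hit: skip the slow call
    result = expensive_query(user_id)
    r.setex(cache_key, 300, json.dumps(result))  # expire after 5 minutes
    return result

The TTL bounds staleness: after five minutes the entry expires and the next caller repopulates the cache.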

Posted 4 weeks ago

Apply

3.0 - 6.0 years

3 - 4 Lacs

Chennai

On-site

Job Summary:

We are looking for a skilled Python Developer with 3 to 6 years of experience to design, develop, and maintain high-quality back-end systems and applications. The ideal candidate will have expertise in Python and related frameworks, with a focus on building scalable, secure, and efficient software solutions. This role requires a strong problem-solving mindset, collaboration with cross-functional teams, and a commitment to delivering innovative solutions that meet business objectives.

Responsibilities

Application and Back-End Development:
- Design, implement, and maintain back-end systems and APIs using Python frameworks such as Django, Flask, or FastAPI, focusing on scalability, security, and efficiency.
- Build and integrate scalable RESTful APIs, ensuring seamless interaction between front-end systems and back-end services.
- Write modular, reusable, and testable code following Python’s PEP 8 coding standards and industry best practices.
- Develop and optimize robust database schemas for relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB), ensuring efficient data storage and retrieval.
- Leverage cloud platforms like AWS, Azure, or Google Cloud for deploying scalable back-end solutions.
- Implement caching mechanisms using tools like Redis or Memcached to optimize performance and reduce latency.

AI/ML Development:
- Build, train, and deploy machine learning (ML) models for real-world applications, such as predictive analytics, anomaly detection, natural language processing (NLP), recommendation systems, and computer vision.
- Work with popular machine learning and AI libraries/frameworks, including TensorFlow, PyTorch, Keras, and scikit-learn, to design custom models tailored to business needs.
- Process, clean, and analyze large datasets using Python tools such as Pandas, NumPy, and PySpark to enable efficient data preparation and feature engineering.
- Develop and maintain pipelines for data preprocessing, model training, validation, and deployment using tools like MLflow, Apache Airflow, or Kubeflow.
- Deploy AI/ML models into production environments and expose them as RESTful or GraphQL APIs for integration with other services.
- Optimize machine learning models to reduce computational costs and ensure smooth operation in production systems.
- Collaborate with data scientists and analysts to validate models, assess their performance, and ensure their alignment with business objectives.
- Implement model monitoring and lifecycle management to maintain accuracy over time, addressing data drift and retraining models as necessary.
- Experiment with cutting-edge AI techniques such as deep learning, reinforcement learning, and generative models to identify innovative solutions for complex challenges.
- Ensure ethical AI practices, including transparency, bias mitigation, and fairness in deployed models.

Performance Optimization and Debugging:
- Identify and resolve performance bottlenecks in applications and APIs to enhance efficiency.
- Use profiling tools to debug and optimize code for memory and speed improvements.
- Implement caching mechanisms to reduce latency and improve application responsiveness.

Testing, Deployment, and Maintenance:
- Write and maintain unit tests, integration tests, and end-to-end tests using Pytest, Unittest, or Nose.
- Collaborate on setting up CI/CD pipelines to automate testing, building, and deployment processes.
- Deploy and manage applications in production environments with a focus on security, monitoring, and reliability.
- Monitor and troubleshoot live systems, ensuring uptime and responsiveness.

Collaboration and Teamwork:
- Work closely with front-end developers, designers, and product managers to implement new features and resolve issues.
- Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives, to ensure smooth project delivery.
- Provide mentorship and technical guidance to junior developers, promoting best practices and continuous improvement.

Required Skills and Qualifications

Technical Expertise:
- Strong proficiency in Python and its core libraries, with hands-on experience in frameworks such as Django, Flask, or FastAPI.
- Solid understanding of RESTful API development, integration, and optimization.
- Experience working with relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB).
- Familiarity with containerization tools like Docker and orchestration platforms like Kubernetes.
- Expertise in using Git for version control and collaborating in distributed teams.
- Knowledge of CI/CD pipelines and tools like Jenkins, GitHub Actions, or CircleCI.
- Strong understanding of software development principles, including OOP, design patterns, and MVC architecture.

Preferred Skills:
- Experience with asynchronous programming using libraries like asyncio, Celery, or RabbitMQ.
- Knowledge of data visualization tools (e.g., Matplotlib, Seaborn, Plotly) for generating insights.
- Exposure to machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn) is a plus.
- Familiarity with big data frameworks like Apache Spark or Hadoop.
- Experience with serverless architecture using AWS Lambda, Azure Functions, or Google Cloud Run.

Soft Skills:
- Strong problem-solving abilities with a keen eye for detail and quality.
- Excellent communication skills to effectively collaborate with cross-functional teams.
- Adaptability to changing project requirements and emerging technologies.
- Self-motivated with a passion for continuous learning and innovation.

Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.

Job Category: Software Division
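
Rather than repeat the caching sketch from the 7-12 year variant of this listing, here is a minimal pytest example for the testing responsibility it lists; the function under test is hypothetical.

# test_pricing.py -- run with: pytest test_pricing.py
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_basic():
    assert apply_discount(200.0, 25) == 150.0

def test_apply_discount_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)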

Posted 4 weeks ago

Apply

6.0 years

0 Lacs

India

Remote

About Firstsource

Firstsource Solutions Limited, an RP-Sanjiv Goenka Group company (NSE: FSL, BSE: 532809, Reuters: FISO.BO, Bloomberg: FSOL:IN), is a specialized global business process services partner, providing transformational solutions and services spanning the customer lifecycle across Healthcare, Banking and Financial Services, Communications, Media and Technology, Retail, and other diverse industries. With an established presence in the US, the UK, India, Mexico, Australia, South Africa, and the Philippines, we make it happen for our clients, solving their biggest challenges with hyper-focused, domain-centered teams and cutting-edge tech, data, and analytics. Our real-world practitioners work collaboratively to deliver future-focused outcomes.

Job Title: Lead Data Scientist
Mode of work: Remote

Responsibilities
- Design and implement data-driven solutions to optimize customer experience metrics, reduce churn, and enhance customer satisfaction using statistical analysis, machine learning, and predictive modeling.
- Collaborate with CX teams, contact center operations, customer success, and product teams to gather requirements, understand customer journey objectives, and translate them into actionable analytical solutions.
- Perform exploratory data analysis (EDA) on customer interaction data, contact center metrics, survey responses, and behavioral data to identify pain points and opportunities for CX improvement.
- Build, validate, and deploy machine learning models for customer sentiment analysis, churn prediction, next-best-action recommendations, contact center forecasting, and customer lifetime value optimization.
- Develop CX dashboards and reports using BI tools to track key metrics like NPS, CSAT, FCR, AHT, and customer journey analytics to support strategic decision-making.
- Optimize model performance for real-time customer experience applications through hyperparameter tuning, A/B testing, and continuous performance monitoring.
- Contribute to customer data architecture and pipeline development to ensure scalable and reliable customer data flows across touchpoints (voice, chat, email, social, web).
- Document CX analytics methodologies, customer segmentation strategies, and model outcomes to ensure reproducibility and enable knowledge sharing across CX transformation initiatives.
- Mentor junior data scientists and analysts on CX-specific use cases, and participate in code reviews to maintain high-quality standards for customer-facing analytics.

Skill Requirements
- Proven experience (6+ years) in data science, analytics, and statistical modeling with a specific focus on customer experience, contact center analytics, or customer behavior analysis, including a strong understanding of CX metrics, customer journey mapping, and voice-of-customer analytics.
- Proficiency in Python and/or R for customer data analysis, sentiment analysis, and CX modeling applications.
- Experience with data analytics libraries such as pandas, NumPy, scikit-learn, and visualization tools like matplotlib, seaborn, or Plotly for customer insights and CX reporting.
- Experience with machine learning frameworks such as scikit-learn, XGBoost, LightGBM, and familiarity with deep learning libraries (TensorFlow, PyTorch) for NLP applications in customer feedback analysis and chatbot optimization.
- Solid understanding of SQL and experience working with customer databases, contact center data warehouses, and CRM systems (e.g., PostgreSQL, MySQL, SQL Server, Salesforce, ServiceNow).
- Familiarity with data engineering tools and frameworks (e.g., Apache Airflow, dbt, Spark, or similar) for building and orchestrating customer data ETL pipelines and real-time streaming analytics. (Good to have)
- Knowledge of data governance, data quality frameworks, and data lake architectures. (Good to have)
- Exposure to business intelligence (BI) tools such as Power BI, Tableau, or Looker for CX dashboarding, customer journey visualization, and executive reporting on customer experience metrics.
- Working knowledge of version control systems (e.g., Git) and collaborative development workflows for customer analytics projects.
- Strong problem-solving skills with customer-centric analytical thinking, and the ability to work independently and as part of cross-functional CX transformation teams.
- Excellent communication and presentation skills, with the ability to explain complex customer analytics concepts to non-technical stakeholders including CX executives, contact center managers, and customer success teams.

Disclaimer: Firstsource follows a fair, transparent, and merit-based hiring process. We never ask for money at any stage. Beware of fraudulent offers and always verify through our official channels or @firstsource.com email addresses.
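
A hedged sketch of the churn-prediction modeling this role centers on, using scikit-learn on synthetic data; the features, labels, and model choice are illustrative only.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic customer features: tenure, monthly contacts, avg handle time
X = rng.normal(size=(1000, 3))
# Synthetic churn labels loosely tied to the first feature
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = GradientBoostingClassifier().fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print(f"ROC-AUC: {roc_auc_score(y_test, probs):.3f}")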

Posted 4 weeks ago

Apply

2.0 years

0 Lacs

Delhi, India

On-site

Digital India Corporation is currently inviting applications for the position of Developer – Data Analytics, purely on a contract/consolidated basis, for the Poshan Tracker project.

Location: Delhi, Noida, Others
No. of Positions: 2
Qualifications: B.E / B.Tech / MCA or any equivalent degree
Experience: 2+ years

Required Skillset
- Hands-on experience with data analytics tools such as Power BI or Tableau, and Python libraries like Plotly and Folium, to create dynamic data visualizations.
- A track record of strong collaboration skills when building data solutions and big data solutions.
- Proven ability to present progress and findings to stakeholders, and to support both central and state decision makers in taking appropriate decisions.
- Creating automated anomaly detection systems and constantly tracking their performance against program KPIs.
- Knowledge of Python, R, QGIS, advanced Excel, etc.
- Good written and oral communication skills; good presentation and analytical ability.
- Experience of working for a government setup/project is desirable.
- Knowledge of WHO Child Growth Standards and anthropometric data quality checks, as Poshan Tracker uses these standards to determine the nutritional status of children.
- Experience analyzing big data and preparing region-wise visualizations for effective decision making.
- Strong understanding of public policy to align data analytics with evidence-based decision making.
- Knowledge of AI/ML, with the ability to make data AI-ready for current usage and trend predictions.
- Design and implement optimized data pipelines to clean, standardize, and transform beneficiary-level data from Poshan Tracker, ensuring it is structured and ready for AI/ML applications.
- Collaborate closely with program teams, developers, and decision-makers to translate analytical needs into technical solutions, including the integration of important indicators into the Poshan Tracker dashboard.
- Lead the development of anomaly detection systems and dynamic reporting modules to flag irregularities in service delivery and identify high-risk regions for malnutrition.
- Creating innovative data validation methods and data analysis tools.
- Ensuring compliance with data governance and security policies.
- Applying strong programming and problem-solving skills to develop scalable solutions.

The last date for submission of applications shall be 20th July 2025.

Note: This is an aggregated listing, shared to bring relevant opportunities to the attention of job seekers. Hireclap is not responsible for or authorized to conduct this recruitment process.
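
To illustrate the region-wise Folium visualization skill this listing names, here is a minimal sketch; the district coordinates and indicator values are entirely hypothetical, not Poshan Tracker data.

import folium

# Hypothetical district-level indicator values (name, lat, lon, value)
districts = [
    ("Pune", 18.52, 73.86, 0.32),
    ("Nagpur", 21.15, 79.09, 0.45),
    ("Nashik", 19.99, 73.79, 0.27),
]

m = folium.Map(location=[20.0, 76.0], zoom_start=6)
for name, lat, lon, value in districts:
    folium.CircleMarker(
        location=[lat, lon],
        radius=8 + 20 * value,          # marker size scales with the indicator
        popup=f"{name}: {value:.0%}",
    ).add_to(m)

m.save("district_map.html")  # open in a browser to explore the map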

Posted 4 weeks ago

Apply

7.0 years

0 Lacs

Delhi, India

On-site

Role Expectations:

Data Collection and Cleaning:
- Collect, organize, and clean large datasets from various sources (internal databases, external APIs, spreadsheets, etc.).
- Ensure data accuracy, completeness, and consistency by cleaning and transforming raw data into usable formats.

Data Analysis:
- Perform exploratory data analysis (EDA) to identify trends, patterns, and anomalies.
- Conduct statistical analysis to support decision-making and uncover insights.
- Use analytical methods to identify opportunities for process improvements, cost reductions, and efficiency enhancements.

Reporting and Visualization:
- Create and maintain clear, actionable, and accurate reports and dashboards for both technical and non-technical stakeholders.
- Design data visualizations (charts, graphs, and tables) that communicate findings effectively to decision-makers.
- Hands-on work with Power BI, Tableau, and Python data visualization libraries such as matplotlib, seaborn, plotly, pyplot, and pandas.
- Experience generating descriptive, predictive, and prescriptive insights with Gen AI using MS Copilot in Power BI.
- Experience in prompt engineering and RAG architectures.
- Prepare reports for upper management and other departments, presenting key findings and recommendations.

Collaboration:
- Work closely with cross-functional teams (marketing, finance, operations, etc.) to understand their data needs and provide actionable insights.
- Collaborate with IT and database administrators to ensure data is accessible and well-structured.
- Provide support and guidance to other teams regarding data-related questions or issues.

Data Integrity and Security:
- Ensure compliance with data privacy and security policies and practices.
- Maintain data integrity and assist with implementing best practices for data storage and access.

Continuous Improvement:
- Stay current with emerging data analysis techniques, tools, and industry trends.
- Recommend improvements to data collection, processing, and analysis procedures to enhance operational efficiency.

Qualifications:

Education: Bachelor's degree in Data Science, Statistics, Computer Science, Mathematics, or a related field. A Master's degree or relevant certifications (e.g., in data analysis, business intelligence) is a plus.

Experience:
- Proven experience as a Data Analyst or in a similar analytical role (typically 7+ years).
- Experience with data visualization tools (e.g., Tableau, Power BI, Looker).
- Strong knowledge of SQL and experience with relational databases.
- Familiarity with data manipulation and analysis tools (e.g., Python, R, Excel, SPSS).
- Hands-on work with Power BI, Tableau, and Python visualization libraries such as matplotlib, seaborn, plotly, pyplot, and pandas.
- Experience with big data technologies (e.g., Hadoop, Spark) is a plus.

Technical Skills:
- Proficiency in SQL and data query languages.
- Knowledge of statistical analysis and methodologies.
- Experience with data visualization and reporting tools.
- Knowledge of data cleaning and transformation techniques.
- Familiarity with machine learning and AI concepts is an advantage (for more advanced roles).

Soft Skills:
- Strong analytical and problem-solving abilities.
- Excellent attention to detail and ability to identify trends in complex data sets.
- Good communication skills to present data insights clearly to both technical and non-technical audiences.
- Ability to work independently and as part of a team.
- Strong time management and organizational skills, with the ability to prioritize tasks effectively.
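
As a small illustration of the Plotly reporting skill this role names, here is a minimal plotly.express sketch; the sales data and column names are hypothetical.

import pandas as pd
import plotly.express as px

# Hypothetical monthly sales by region
df = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar"] * 2,
    "region": ["North"] * 3 + ["South"] * 3,
    "sales": [120, 135, 150, 90, 110, 95],
})

fig = px.line(df, x="month", y="sales", color="region",
              title="Monthly Sales by Region")
fig.write_html("sales_report.html")  # shareable, interactive chart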

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a highly skilled and analytical Data Scientist with 5+ years of experience in building and deploying advanced machine learning models and delivering actionable business insights. The ideal candidate will have a strong foundation in statistics, machine learning, and data engineering, and the ability to work with large-scale datasets to solve real-world problems.

Key Responsibilities
- Analyze large datasets to extract meaningful insights and develop predictive and prescriptive models using machine learning algorithms.
- Design, develop, and validate ML models for classification, regression, clustering, and recommendation systems.
- Perform data wrangling, cleaning, transformation, and feature extraction to prepare datasets for modeling.
- Work closely with product, engineering, and business teams to understand problems and translate them into data-driven solutions.
- Create dashboards, reports, and data visualizations to communicate findings clearly and effectively.
- Deploy models into production environments and monitor performance, retraining when necessary.
- Stay current with the latest research in AI/ML and apply cutting-edge techniques to business use cases.

Required Skills & Qualifications
- 5+ years of experience in data science, machine learning, or advanced analytics.
- Strong proficiency in Python or R, along with libraries such as scikit-learn, pandas, NumPy, TensorFlow, PyTorch, etc.
- Proficiency in SQL and experience with data querying in relational and non-relational databases.
- Experience working with big data platforms (e.g., Spark, Hadoop) is a plus.
- Solid understanding of statistics, probability, and model evaluation techniques.
- Experience with data visualization tools like Tableau, Power BI, or Plotly.

Preferred Skills
- Experience with cloud platforms (AWS, GCP, Azure) and MLOps tools (MLflow, Kubeflow).
- Exposure to NLP, deep learning, or time series forecasting.
- Familiarity with Git, Docker, and CI/CD for model deployment.
- Prior experience in domains such as finance, retail, healthcare, or telecom is advantageous.

(ref:hirist.tech)
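
For the clustering responsibility this listing names, a minimal scikit-learn sketch on synthetic customer data follows; the features and cluster count are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic customer features: annual spend, visit frequency
X = np.vstack([
    rng.normal([200, 2], [30, 0.5], size=(100, 2)),   # low spenders
    rng.normal([900, 10], [80, 1.5], size=(100, 2)),  # high spenders
])

# Scale features so both dimensions contribute equally to distances
X_scaled = StandardScaler().fit_transform(X)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=1).fit(X_scaled)
print("Cluster sizes:", np.bincount(kmeans.labels_))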

Posted 4 weeks ago

Apply

5.0 - 8.0 years

20 - 30 Lacs

Bengaluru

Work from Office

Title: Software Engineer
Location: Multiple Locations
Job Term: Full-Time

The Opportunity:

At Picarro, Software Engineering focuses on developing and deploying industry vertical applications to clients in the scientific and energy communities. This specific role is focused on the suite of solutions, such as greenhouse gas emissions quantification, pipe replacement, and advanced leak detection, used by our gas utility and pipeline customers. The majority have a web-based user interface, but the backend utilizes geoprocessing, data, and ML services. While the products are designed to meet the needs of the industry, they sit within Picarro's larger analytical suite / distributed framework, so a wider collection of skills is desired. The software engineer participates in the design, programming, testing, documentation and implementation of applications and related processes/systems. You may also be required to identify and evaluate development options, assess future needs for key technologies and techniques, and develop plans for adoption. This position reports to the Software Development Manager and will be on site, based out of our Bangalore, India office.

Key Responsibilities:
- Work directly with product stakeholders and product management to understand product use cases and synthesize business requirements.
- Design, develop, and deploy high-performance multi-tenant dashboard applications using Dash Enterprise.
- Write production-quality code that creates responsive web applications.
- Handle multiple technical requests and project deadlines simultaneously.
- Collaborate daily with a global team of software engineers, data scientists, analysts, and product managers using a variety of online communication channels.
- Apply software development best practices including version control (Git), code review, and testing.
- Document technical details of work using Jira and Confluence.

Desired Skills and Experience:
- 5+ years of overall software development experience.
- 5+ years of experience developing responsive web applications using HTML, CSS, and JavaScript.
- 3+ years of Python experience, specifically in an object-oriented structure.
- Experience with common data analytics and visualization libraries such as NumPy, Pandas, json, SQLAlchemy, Plotly, and/or Matplotlib.
- Experience with geospatial libraries such as Shapely, GeoPandas, GDAL/OGR, and PyProj is a plus.
- 1+ years with SQL for analytical use cases.
- 1+ years of experience with a modern web UI library, like React, Vue, Angular, or Svelte.
- 1+ years of experience developing web applications using Python.
- 1+ years of experience with at least one common data visualization tool such as Tableau, Power BI, Qlik, or Dash Enterprise.
- 1+ years of cloud development (e.g. AWS, Azure, Google Cloud) and software container technologies (e.g. Docker, Kubernetes).
- Familiar with Agile methodologies and processes.
- Familiar with gas distribution company processes and/or pipeline and distribution network data.
- Bachelor's or Master's degree in computer science, engineering, GIS, geography, or a related field.

About Picarro:

Picarro, Inc. is the world's leading producer of greenhouse gas and optical stable isotope instruments, which are used in a wide variety of scientific and industrial applications, including atmospheric science, air quality, greenhouse gas measurements, gas leak detection, food safety, hydrology, ecology and more. The company's products are all designed and manufactured at Picarro's Santa Clara, California headquarters and exported to countries worldwide. Picarro's products are based on dozens of patents related to cavity ring-down spectroscopy (CRDS) technology. Picarro's solutions are unparalleled in their precision, ease of use, portability, and reliability.

Honors awarded the company include the World Economic Forum Technology Innovation Pioneer, IHS CERA Energy Innovation Pioneer, the U.S. Department of Energy Small Business of the Year, the TiE50 Winner and the Red Herring Global 100 Winner. Key investors include Benchmark Capital Partners, Greylock Management Corporation, Duff, Ackerman & Goodrich, Stanford University, Focus Ventures, Mingxin China Growth Ltd., NTT Finance and Weston Presidio Capital.

All qualified applicants will receive consideration for employment without regard to race, sex, color, religion, national origin, protected veteran status, gender identity, sexual orientation, or disability. Posted positions are not open to third-party recruiters/agencies, and unsolicited resume submissions will be considered a free referral. If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, or are limited in the ability or unable to access or use this online application process and need an alternative method for applying, you may contact Picarro, Inc. at disabilityassistance@picarro.com for assistance.
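
A minimal sketch of the kind of Dash dashboard this role builds; the measurement data is hypothetical and this is not Picarro's actual product. On older Dash versions, app.run(...) is app.run_server(...).

import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

# Hypothetical leak-survey measurements
df = pd.DataFrame({
    "site": ["A", "B", "C", "D"],
    "ch4_ppm": [2.1, 5.4, 3.2, 8.7],
})

app = Dash(__name__)
app.layout = html.Div([
    html.H2("Methane Readings by Site"),
    dcc.Graph(figure=px.bar(df, x="site", y="ch4_ppm")),
])

if __name__ == "__main__":
    app.run(debug=True)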

Posted 4 weeks ago

Apply

35.0 years

0 Lacs

Chennai

On-site

About us

One team. Global challenges. Infinite opportunities. At Viasat, we’re on a mission to deliver connections with the capacity to change the world. For more than 35 years, Viasat has helped shape how consumers, businesses, governments and militaries around the globe communicate. We’re looking for people who think big, act fearlessly, and create an inclusive environment that drives positive impact to join our team.

What you'll do
- Parse and manipulate raw data leveraging tools including R, Python, and Tableau, with a strong preference for Python.
- Ingest, understand, and fully synthesize large amounts of data from multiple sources to build a full comprehension of the story.
- Analyze large data sets, finding the truth in the data, and develop efficient processes for data analysis and simple, elegant visualization.
- Develop and automate daily, monthly, and quarterly reporting for multiple business areas within Viasat.
- Identify data gaps, research methods to fill these gaps, and provide recommendations.
- Gather and analyze facts and devise solutions to administrative problems.
- Monitor big data with Business Intelligence tools, simulation, modeling, and statistics.
- Build intuitive and actionable dashboards and data visualizations that drive business decisions (Tableau / Power BI / Grafana).

The day-to-day
- Develop and automate daily, monthly, and quarterly reporting for multiple business areas within Viasat.
- Identify data gaps, research methods to fill these gaps, and provide recommendations.
- Gather and analyze facts and devise solutions to administrative problems.
- Monitor big data with Business Intelligence tools, simulation, modeling, and statistics.

What you'll need
- 3-4 years of SQL experience.
- 3-4 years of data analysis experience with an emphasis on reporting.
- 3-4 years of Python experience in data cleansing, statistics, and data visualization packages (i.e. pandas, scikit-learn, matplotlib, seaborn, plotly, etc.).
- 6-8 years of dashboarding experience: Tableau / Power BI / Grafana or equivalent data visualization tools.
- Excellent judgment, critical thinking, and decision-making skills; can balance attention to detail with swift execution.
- Able to identify stakeholders, build relationships, and influence others to drive progress.
- Excellent analytical and problem-solving skills.
- Strong oral and written communication skills.
- Strong statistical background.

What will help you on the job
- Strong preference for personal projects and work in Python.
- Data visualization experience.
- Data science experience.

EEO Statement

Viasat is proud to be an equal opportunity employer, seeking to create a welcoming and diverse environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, ancestry, physical or mental disability, medical condition, marital status, genetics, age, or veteran status or any other applicable legally protected status or characteristic. If you would like to request an accommodation on the basis of disability for completing this on-line application, please click here.
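
A hedged sketch of the data-cleansing and automated-reporting work this listing describes, using pandas on made-up usage records; the accounts and fill strategy are illustrative assumptions.

import pandas as pd

# Hypothetical raw usage records with duplicates and a missing value
raw = pd.DataFrame({
    "account": ["a1", "a1", "a2", "a3", "a3"],
    "month": ["2024-01", "2024-01", "2024-01", "2024-01", "2024-02"],
    "gb_used": [10.5, 10.5, None, 7.2, 8.1],
})

clean = (
    raw.drop_duplicates()  # remove exact duplicate rows
       .assign(gb_used=lambda d: d["gb_used"].fillna(d["gb_used"].median()))
)

# Monthly report: total and average usage
report = clean.groupby("month")["gb_used"].agg(["sum", "mean"])
print(report)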

Posted 1 month ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Senior ML Engineer
Experience: minimum 4 to 8+ years of ML development experience in a product-based company
Location: Bangalore (Onsite)

Why should you choose us?

Rakuten Symphony is a Rakuten Group company that provides global B2B services for the mobile telco industry and enables next-generation, cloud-based, international mobile services. Building on the technology Rakuten used to launch Japan’s newest mobile network, we are taking our mobile offering global. To support our ambitions to provide an innovative cloud-native telco platform for our customers, Rakuten Symphony is looking to recruit and develop top talent from around the globe. We are looking for individuals to join our team across all functional areas of our business, from sales to engineering, support functions to product development. Let’s build the future of mobile telecommunications together!

Required Skills and Expertise:
- Must have experience working in a product-based company.
- Build, train, and optimize deep learning models with TensorFlow, Keras, PyTorch, and Transformers.
- Experience manipulating and analysing large-scale datasets using Python, Pandas, NumPy, and Dask.
- Apply advanced fine-tuning techniques (full fine-tuning, PEFT) and strategies to large language and vision models.
- Implement and evaluate classical machine learning algorithms using scikit-learn, statsmodels, XGBoost, etc.
- Develop and deploy scalable APIs for ML models using FastAPI.
- Experience performing data visualization and exploratory data analysis with Matplotlib, Seaborn, Plotly, and Bokeh.
- Collaborate with cross-functional teams to deliver end-to-end ML solutions.
- Deploy machine learning models for diverse business applications, both cloud-native and on-premise.
- Hands-on experience with Docker for containerization and Kubernetes for orchestration and scalable deployment of ML models.
- Familiarity with CI/CD pipelines and best practices for deploying and monitoring ML models in production.
- Stay current with the latest advancements in machine learning, deep learning, and AI.

Our commitment to you:
- Rakuten Group’s mission is to contribute to society by creating value through innovation and entrepreneurship. By providing high-quality services that help our users and partners grow, we aim to advance and enrich society.
- To fulfill our role as a Global Innovation Company, we are committed to maximizing both corporate and shareholder value.

RAKUTEN SHUGI PRINCIPLES:

Our worldwide practices describe specific behaviours that make Rakuten unique and united across the world. We expect Rakuten employees to model these 5 Shugi Principles of Success.
- Always improve, always advance. Only be satisfied with complete success - Kaizen.
- Be passionately professional. Take an uncompromising approach to your work and be determined to be the best.
- Hypothesize - Practice - Validate - Shikumika. Use the Rakuten Cycle to succeed in unknown territory.
- Maximize Customer Satisfaction. The greatest satisfaction for workers in a service industry is to see their customers smile.
- Speed!! Speed!! Speed!! Always be conscious of time. Take charge, set clear goals, and engage your team.
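
For the model-training skill this listing names, a minimal PyTorch training loop on synthetic data follows; the tiny network, data, and hyperparameters are illustrative assumptions, far simpler than production fine-tuning.

import torch
from torch import nn

torch.manual_seed(0)

# Synthetic binary-classification data: 256 samples, 10 features
X = torch.randn(256, 10)
y = (X[:, 0] > 0).float().unsqueeze(1)  # label depends on the first feature

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)  # forward pass on the full batch
    loss.backward()              # backpropagate gradients
    optimizer.step()             # update weights

print(f"final loss: {loss.item():.4f}")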

Posted 1 month ago

Apply

89.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Morgan Stanley Python Data Engineering Lead - Vice President - Software Engineering

Profile Description
We’re seeking someone to join our team as a Vice President: a collaborative, hands-on Python development lead who will help build out our global data science environment for quants and sales & trading. DataZone is a next-generation, scalable data science environment that allows for collaboration across data, modelling, and infrastructure.

Institutional Securities Technology
Institutional Securities Technology (IST) develops and oversees the overall technology strategy and bespoke technology solutions to drive and enable the institutional businesses and enterprise-wide functions. IST’s ‘clients’ include Fixed Income, Equities, Commodities, Investment Banking, Research, Prime Brokerage and Global Capital Markets.

ESTAR
ESTAR develops the trading and analytics platform for IST, and the structured OTC trading, market-making and risk platform for IED.

Software Engineering
This is a Vice President position that develops and maintains software solutions that support business needs.

Morgan Stanley is an industry leader in financial services, known for mobilizing capital to help governments, corporations, institutions, and individuals around the world achieve their financial goals. At Morgan Stanley India, we support the Firm’s global businesses, with critical presence across Institutional Securities, Wealth Management, and Investment Management, as well as in the Firm’s infrastructure functions of Technology, Operations, Finance, Risk Management, Legal and Corporate & Enterprise Services. Morgan Stanley has been rooted in India since 1993, with campuses in both Mumbai and Bengaluru. We empower our multi-faceted and talented teams to advance their careers and make a global impact on the business. For those who show passion and grit in their work, there’s ample opportunity to move across the businesses. Interested in joining a team that’s eager to create, innovate and make an impact on the world? Read on…

What You’ll Do In The Role
DataZone is a rapidly growing and high-priority area, and the growth in users and use cases has warranted investment from the business to build out a team in Mumbai, which needs a strong, hands-on technical lead and architect. The DataZone platform is developed in Python, but it requires a technical expert who has built large, complex systems of the kind typically written as C++ or Java distributed systems. DataZone is available on-prem and will also run in the cloud, so the individual needs a good understanding of cloud technologies and Python best practices. You will work closely with DataZone technical leads, architects, quants and traders on multiple trading desks to design, develop, deploy and support an innovative data science environment that utilizes the latest Python ecosystem. You will work within a global team in an agile environment, so strong communication skills are very important.

Skills Required
What you’ll bring to the role:
Several years of experience designing and developing data-driven systems in Python
Experience leading strong technical team members and developing their technical careers
Experience with OOP and building out server-side systems in C++ or Java
System architecture and design skills
Cloud technologies and tools
An Agile/DevOps and SRE-driven mindset
The Python data engineering stack, including libraries such as Flask / NumPy / Pandas / Dask / Plotly / Jupyter

Skills Desired
Full-stack HTML5 web development experience
KDB knowledge or experience with another high-performance timeseries database
Experience with financial concepts such as equities / fixed income / options / futures
Experience with AWS or Azure

What You Can Expect From Morgan Stanley
We are committed to maintaining the first-class service and high standard of excellence that have defined Morgan Stanley for over 89 years. Our values - putting clients first, doing the right thing, leading with exceptional ideas, committing to diversity and inclusion, and giving back - aren’t just beliefs; they guide the decisions we make every day to do what's best for our clients, communities and more than 80,000 employees in 1,200 offices across 42 countries. At Morgan Stanley, you’ll find an opportunity to work alongside the best and the brightest, in an environment where you are supported and empowered. Our teams are relentless collaborators and creative thinkers, fueled by their diverse backgrounds and experiences. We are proud to support our employees and their families at every point along their work-life journey, offering some of the most attractive and comprehensive employee benefits and perks in the industry. There’s also ample opportunity to move about the business for those who show passion and grit in their work. To learn more about our offices across the globe, please copy and paste https://www.morganstanley.com/about-us/global-offices into your browser.

Morgan Stanley is an equal opportunities employer. We work to provide a supportive and inclusive environment where all individuals can maximize their full potential. Our skilled and creative workforce is comprised of individuals drawn from a broad cross section of the global communities in which we operate and who reflect a variety of backgrounds, talents, perspectives, and experiences. Our strong commitment to a culture of inclusion is evident through our constant focus on recruiting, developing, and advancing individuals based on their skills and talents.
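
To make the Python data engineering stack named above concrete, here is a minimal sketch using Dask to aggregate a dataset too large for a single pandas frame; the trades-*.csv file pattern and the desk/notional columns are hypothetical:

```python
# Minimal sketch: out-of-core aggregation with Dask DataFrame.
# The trades-*.csv file pattern and columns (desk, notional) are illustrative.
import dask.dataframe as dd

# Lazily read many CSV partitions as one logical dataframe.
df = dd.read_csv("trades-*.csv")

# Build the aggregation lazily, then execute it across partitions.
notional_by_desk = df.groupby("desk")["notional"].sum()
print(notional_by_desk.compute())
```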

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

Remote

About Company
Papigen is a fast-growing global technology services company, delivering innovative digital solutions through deep industry experience and cutting-edge expertise. We specialize in technology transformation, enterprise modernization, and dynamic areas like Cloud, Big Data, Java, React, DevOps, and more. Our client-centric approach combines consulting, engineering, and data science to help businesses evolve and scale efficiently.

About The Role
We are looking for a Python Full Stack Developer with strong Azure DevOps and AI integration expertise to support the automation of Kanban workflows and real-time analytics in a scaled agile environment. You will design end-to-end automation for case management, build performance dashboards, and integrate AI-powered solutions using Azure OpenAI, Dataverse, and Power BI. The role requires a deep understanding of Python development, experience with Azure services, and the ability to collaborate with cross-functional teams to deliver high-quality solutions.

Key Responsibilities
Develop Python applications to automate Kanban case management integrated with Azure DevOps (ADO)
Build and maintain REST APIs with access control for project and workload metrics
Integrate Azure OpenAI services to automate delay analysis and generate custom summaries
Design interactive dashboards using Python libraries (Pandas, Plotly, Dash) and Power BI
Store, manage, and query data using Dataverse for workflow reporting and updates
Leverage Microsoft Graph API and Azure SDKs for system integration and access control
Collaborate with IT security, PMs, and engineering teams to gather requirements and deliver automation solutions
Continuously improve security workflows, report generation, and system insights using AI and data modeling

Required Skills & Experience
5+ years of Python development experience with FastAPI or Flask
Hands-on experience with Azure DevOps, including its REST APIs
Proficiency in Azure OpenAI, Azure SDKs, and Microsoft Graph API
Strong understanding of RBAC (Role-Based Access Control) and permissions management
Experience with Power BI, Dataverse, and Python data visualization libraries (Matplotlib, Plotly, Dash)
Prior experience in Agile teams and familiarity with Scrum/Kanban workflows
Excellent communication and documentation skills; able to explain technical concepts to stakeholders

Benefits And Perks
Opportunity to work with leading global clients
Flexible work arrangements with remote options
Exposure to modern technology stacks and tools
Supportive and collaborative team environment
Continuous learning and career development opportunities

Skills: Pandas, Azure SDKs, Microsoft Graph API, REST APIs, AI integration, Power BI, Dash, Python, Azure DevOps, FastAPI, Dataverse, RBAC, Azure OpenAI, Plotly, Flask, Matplotlib
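
For illustration, a minimal sketch of querying Azure DevOps work items over its REST API with a personal access token, the kind of ADO integration this role automates; the organization, project, environment variable, and WIQL query are placeholders, and error handling is omitted:

```python
# Minimal sketch: fetch work item IDs from Azure DevOps via its REST API.
# ORG, PROJECT, and the ADO_PAT environment variable are hypothetical.
import os
import requests

ORG, PROJECT = "my-org", "my-project"
pat = os.environ["ADO_PAT"]  # personal access token

# WIQL query for active items on the Kanban board.
wiql = {"query": "SELECT [System.Id] FROM WorkItems WHERE [System.State] = 'Active'"}
url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/wiql?api-version=7.0"

# ADO uses Basic auth with an empty username and the PAT as the password.
resp = requests.post(url, json=wiql, auth=("", pat))
resp.raise_for_status()

ids = [item["id"] for item in resp.json()["workItems"]]
print(f"{len(ids)} active work items:", ids[:10])
```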

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

Remote

For an international project in Chennai, we are urgently looking for a full-remote Python Full Stack Developer. We are looking for a motivated contractor. Candidates need to be fluent in English.

Tasks and responsibilities:
Write Python programs to deliver an automated ADO Kanban case management solution, along with dashboards and reports;
Develop and integrate REST APIs with access control to provide case status and reports specific to LOBs, managers, etc.;
Utilize Azure OpenAI frameworks to enhance delay analysis, vulnerability dashboards and reporting;
Build dashboards using Python libraries (e.g., Pandas, Matplotlib, Plotly) to track case status from Kanban boards, delays per project/LOB/etc.; use Dataverse and Power BI for data modelling and reporting as well;
Collaboration and support: work closely with project managers, IT security staff, and system administrators to gather requirements, understand business needs, and develop solutions that improve security processes;
Continuously evaluate and improve the Kanban case management solution, leveraging new technologies and techniques, particularly AI and automation, to improve efficiency and effectiveness.

Profile:
Bachelor's or Master's degree;
5+ years of hands-on experience with Python, particularly in frameworks like FastAPI and Flask, and experience using Azure OpenAI frameworks;
Strong understanding of access control models such as Role-Based Access Control (RBAC);
Expertise working with Azure DevOps and customizing it via its REST APIs;
Proficiency with Azure cloud services and the Microsoft Graph API, and experience integrating Python applications;
Experience with Dataverse, Power BI, and reporting libraries in Python (Pandas, Matplotlib, Plotly, Dash) to build dashboards and reports;
Ability to collaborate with various stakeholders, explain complex technical solutions, and deliver high-quality solutions on time;
Experience working in Agile environments and familiarity with Scrum and Kanban methodologies;
Fluent in English.
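
As a sketch of the dashboarding side of this role, here is a minimal Plotly Dash app charting case counts per status; the hardcoded counts stand in for data that would be fetched from the ADO API:

```python
# Minimal sketch: a Plotly Dash app charting Kanban case counts by status.
# The hardcoded counts stand in for data fetched from Azure DevOps.
import plotly.express as px
from dash import Dash, dcc, html

counts = {"To Do": 12, "In Progress": 7, "Blocked": 3, "Done": 41}
fig = px.bar(x=list(counts.keys()), y=list(counts.values()),
             labels={"x": "Status", "y": "Cases"}, title="Kanban case status")

app = Dash(__name__)
app.layout = html.Div([html.H2("Case Management Dashboard"), dcc.Graph(figure=fig)])

if __name__ == "__main__":
    app.run(debug=True)  # use app.run_server(...) on older Dash versions
```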

Posted 1 month ago

Apply

3.0 - 4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Us
One team. Global challenges. Infinite opportunities. At Viasat, we’re on a mission to deliver connections with the capacity to change the world. For more than 35 years, Viasat has helped shape how consumers, businesses, governments and militaries around the globe communicate. We’re looking for people who think big, act fearlessly, and create an inclusive environment that drives positive impact to join our team.

What You'll Do
Parse and manipulate raw data using tools such as R, Python, and Tableau, with a strong preference for Python
Ingest, understand, and fully synthesize large amounts of data from multiple sources to build a full comprehension of the story
Analyze large data sets, find the truth in the data, and develop efficient processes for data analysis and simple, elegant visualization
Develop and automate daily, monthly, and quarterly reporting for multiple business areas within Viasat
Identify data gaps, research methods to fill them, and provide recommendations
Gather and analyze facts and devise solutions to administrative problems
Monitor big data with Business Intelligence tools, simulation, modeling, and statistics
Build intuitive and actionable dashboards and data visualizations that drive business decisions (Tableau/Power BI/Grafana)

The day-to-day
Develop and automate daily, monthly, and quarterly reporting for multiple business areas within Viasat
Identify data gaps, research methods to fill them, and provide recommendations
Gather and analyze facts and devise solutions to administrative problems
Monitor big data with Business Intelligence tools, simulation, modeling, and statistics

What You'll Need
3-4 years of SQL experience
3-4 years of data analysis experience with an emphasis on reporting
3-4 years of Python experience with data cleansing, statistics, and data visualization packages (e.g., pandas, scikit-learn, matplotlib, seaborn, plotly)
6-8 years of dashboarding experience with Tableau, Power BI, Grafana, or equivalent data visualization tools
Excellent judgment, critical-thinking, and decision-making skills; can balance attention to detail with swift execution
Able to identify stakeholders, build relationships, and influence others to drive progress
Excellent analytical and problem-solving skills
Strong oral and written communication skills
Strong statistical background

What Will Help You On The Job
Personal projects and professional work in Python (strongly preferred)
Data visualization experience
Data science experience

EEO Statement
Viasat is proud to be an equal opportunity employer, seeking to create a welcoming and diverse environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, ancestry, physical or mental disability, medical condition, marital status, genetics, age, or veteran status or any other applicable legally protected status or characteristic. If you would like to request an accommodation on the basis of disability for completing this online application, please click here.

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About lululemon:
lululemon is an innovative performance apparel company for yoga, running, training, and other athletic pursuits. Setting the bar in technical fabrics and functional design, we create transformational products and experiences that support people in moving, growing, connecting, and being well. We owe our success to our innovative products, commitment to our people, and the incredible connections we make in every community we're in. As a company, we focus on creating positive change to build a healthier, thriving future. In particular, that includes creating an equitable, inclusive and growth-focused environment for our people. As we continue to elevate our shopping experience, our India Tech Hub plays a key role in expanding our technology capabilities in Product Systems, Merchandising and Planning, Digital Presence, distribution and logistics, and corporate systems. Our team in India works as an extension of the global team on projects of strategic importance.

About the team:
The opportunity is for a role in the Content team. Our team is fast-paced and taking on exciting new initiatives to support a fast-growing business. We strive to adopt leading-edge technology and seek to continuously improve on the solutions we have. We are not afraid to try new things, have fun, and encourage each other to take on new challenges. We support each other in growing professionally and personally. We fail forward and learn from our mistakes, thus creating a better path ahead. We create space for team members to share feedback and ideas that can help us continually grow as an organization. We operate in agile methodology and contribute to product teams across our various functions as well as a core commerce platform team. We institute a culture of fun and lightheartedness to enjoy work each day. We are inclusive and know that we are stronger as a team than as individuals.

Responsibilities:
Develop statistical/machine learning models and analyses for Merchandising and Planning business problems
Play a key role in all stages of the data science project life cycle
Collaborate with Product Management and business teams to gain business understanding and collect requirements
Identify required data sources and automate the collection process
Perform pre-processing and exploratory data analysis
Evaluate and interpret results and present them to stakeholders and business leaders
Collaborate with engineering and product development teams to deploy models into production systems, when applicable

Requirements and skills:
Proven experience in technical delivery of solutions with time series / machine learning techniques
Strong applied statistical skills, including knowledge of statistical tests, distributions, etc.
Strong applied machine learning skills (time series, regression analysis, supervised and unsupervised learning)
Strong programming skills in Python and database query languages like SQL; familiarity with Snowflake and Databricks is an added advantage
Experience with time series forecasting techniques such as ARIMA, Prophet, and DeepAR
Experience with data visualization libraries like Plotly and business intelligence tools (e.g., Power BI, Tableau)
Excellent communication and presentation skills
Experience in the retail industry

Additional responsibilities:
Identify valuable data sources and automate collection processes
Undertake preprocessing of structured and unstructured data
Analyze large amounts of information to discover trends and patterns
Build predictive models and machine-learning algorithms
Combine models through ensemble modeling
Present information using data visualization techniques
Propose solutions and strategies to business challenges
Collaborate with engineering and product development teams
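
For illustration, a minimal time-series forecasting sketch with statsmodels' ARIMA on synthetic monthly demand data; the series and the (1, 1, 1) order are placeholders, not a tuned production model:

```python
# Minimal sketch: fit an ARIMA model and forecast 12 periods ahead.
# The synthetic monthly demand series and (1, 1, 1) order are illustrative.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
demand = pd.Series(100 + np.arange(48) * 2 + rng.normal(0, 5, 48), index=idx)

model = ARIMA(demand, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=12)  # next 12 months
print(forecast.round(1))
```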

Posted 1 month ago

Apply

3.0 - 4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Us
One team. Global challenges. Infinite opportunities. At Viasat, we’re on a mission to deliver connections with the capacity to change the world. For more than 35 years, Viasat has helped shape how consumers, businesses, governments and militaries around the globe communicate. We’re looking for people who think big, act fearlessly, and create an inclusive environment that drives positive impact to join our team.

What You'll Do
Parse and manipulate raw data using tools such as R, Python, and Tableau, with a strong preference for Python
Ingest, understand, and fully synthesize large amounts of data from multiple sources to build a full comprehension of the story
Analyze large data sets, find the truth in the data, and develop efficient processes for data analysis and simple, elegant visualization
Develop and automate daily, monthly, and quarterly reporting for multiple business areas within Viasat
Identify data gaps, research methods to fill them, and provide recommendations
Gather and analyze facts and devise solutions to administrative problems
Monitor big data with Business Intelligence tools, simulation, modeling, and statistics
Build intuitive and actionable dashboards and data visualizations that drive business decisions (Tableau/Power BI/Grafana)

The day-to-day
Develop and automate daily, monthly, and quarterly reporting for multiple business areas within Viasat
Identify data gaps, research methods to fill them, and provide recommendations
Gather and analyze facts and devise solutions to administrative problems
Monitor big data with Business Intelligence tools, simulation, modeling, and statistics

What You'll Need
3-4 years of SQL experience
3-4 years of data analysis experience with an emphasis on reporting
3-4 years of Python experience with data cleansing, statistics, and data visualization packages (e.g., pandas, scikit-learn, matplotlib, seaborn, plotly)
6-8 years of dashboarding experience with Tableau, Power BI, Grafana, or equivalent data visualization tools
Excellent judgment, critical-thinking, and decision-making skills; can balance attention to detail with swift execution
Able to identify stakeholders, build relationships, and influence others to drive progress
Excellent analytical and problem-solving skills
Strong oral and written communication skills
Strong statistical background

What Will Help You On The Job
Personal projects and professional work in Python (strongly preferred)
Data visualization experience
Data science experience

EEO Statement
Viasat is proud to be an equal opportunity employer, seeking to create a welcoming and diverse environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, ancestry, physical or mental disability, medical condition, marital status, genetics, age, or veteran status or any other applicable legally protected status or characteristic. If you would like to request an accommodation on the basis of disability for completing this online application, please click here.

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Summary
Synechron is seeking a detail-oriented Data Analyst to leverage advanced data analysis, visualization, and insights to support our business objectives. The ideal candidate will have a strong background in creating interactive dashboards, performing complex data manipulations using SQL and Python, and automating workflows to drive efficiency. Familiarity with cloud platforms such as AWS is a plus, enabling optimization of data storage and processing solutions. This role will enable data-driven decision-making across teams, contributing to strategic growth and operational excellence.

Software Requirements
Required:
PowerBI (or equivalent visualization tools like Streamlit, Dash)
SQL (for data extraction, manipulation, and querying)
Python (for scripting, automation, and advanced analysis)
Data management tools compatible with cloud platforms (e.g., AWS S3, Redshift, or similar)
Preferred:
Cloud platform familiarity, especially AWS services related to data storage and processing
Knowledge of other visualization platforms (Tableau, Looker)
Familiarity with source control systems (e.g., Git)

Overall Responsibilities
Develop, redesign, and maintain interactive dashboards and visualization tools to provide actionable insights.
Perform complex data analysis, transformations, and validation using SQL and Python.
Automate data workflows, reporting, and visualizations to streamline processes.
Collaborate with business teams to understand data needs and translate them into effective visual and analytical solutions.
Support data extraction, cleaning, and validation from various sources, ensuring data accuracy.
Maintain and enhance understanding of cloud environments, especially AWS, to optimize data storage, processing pipelines, and scalability.
Document technical procedures and contribute to best practices for data management and reporting.

Performance Outcomes:
Timely, accurate, and insightful dashboards and reports.
Increased automation reducing manual effort.
Clear communication of insights and data-driven recommendations to stakeholders.

Technical Skills (By Category)
Programming Languages:
Essential: SQL, Python
Preferred: R, additional scripting languages
Databases/Data Management:
Essential: Relational databases (SQL Server, MySQL, Oracle)
Preferred: NoSQL databases like MongoDB, cloud data warehouses (AWS Redshift, Snowflake)
Cloud Technologies:
Essential: Basic understanding of AWS cloud services (S3, EC2, RDS)
Preferred: Experience with cloud-native data solutions and deployment
Frameworks and Libraries:
Python: Pandas, NumPy, Matplotlib, Seaborn, Plotly, Streamlit, Dash
Visualization: PowerBI, Tableau (preferred)
Development Tools and Methodologies:
Version control: Git
Automation tools for workflows and reporting
Familiarity with Agile methodologies
Security Protocols:
Awareness of data security best practices and compliance standards in cloud environments

Experience Requirements
3-5 years of experience in data analysis, visualization, or related data roles.
Proven ability to deliver insightful dashboards, reports, and analysis.
Experience working across teams and communicating complex insights clearly.
Knowledge of cloud environments like AWS or other cloud providers is desirable.
Experience in a business environment, not necessarily as a full-time developer, but as an analytical influencer.

Day-to-Day Activities
Collaborate with stakeholders to gather requirements and define data visualization strategies.
Design and maintain dashboards using PowerBI, Streamlit, Dash, or similar tools.
Extract, transform, and analyze data using SQL and Python scripts.
Automate recurring workflows and report generation to improve operational efficiencies.
Troubleshoot data issues and derive insights to support decision-making.
Monitor and optimize cloud data storage and processing pipelines.
Present findings to business units, translating technical outputs into actionable recommendations.

Qualifications
Bachelor’s degree in Computer Science, Data Science, Statistics, or related field. Master’s degree is a plus.
Relevant certifications (e.g., PowerBI, AWS Data Analytics) are advantageous.
Demonstrated experience with data visualization and scripting tools.
Continuous learning mindset to stay updated on new data analysis trends and cloud innovations.

Professional Competencies
Strong analytical and problem-solving skills.
Effective communication, with the ability to explain complex insights clearly.
Collaborative team player with stakeholder management skills.
Adaptability to rapidly changing data or project environments.
Innovative mindset to suggest and implement data-driven solutions.
Organized, self-motivated, and capable of managing multiple priorities efficiently.

SYNECHRON’S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture – promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.

Candidate Application Notice
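
As an illustration of the dashboarding stack this posting names, here is a minimal Streamlit sketch that loads a SQL query result and renders an interactive chart; the SQLite database URL, sales table, and column names are hypothetical:

```python
# Minimal sketch: a Streamlit dashboard over a SQL query result.
# The SQLite URL and the sales table/columns are illustrative placeholders.
import pandas as pd
import plotly.express as px
import streamlit as st
from sqlalchemy import create_engine

st.title("Sales Overview")

engine = create_engine("sqlite:///sales.db")
df = pd.read_sql("SELECT region, month, revenue FROM sales", engine)

# Interactive filter: pick a region, then chart and tabulate its revenue.
region = st.selectbox("Region", sorted(df["region"].unique()))
subset = df[df["region"] == region]

st.plotly_chart(px.line(subset, x="month", y="revenue", title=f"Revenue: {region}"))
st.dataframe(subset)

# Run with: streamlit run app.py
```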

Posted 1 month ago

Apply

8.0 years

0 Lacs

Kovilpatti, Tamil Nadu

On-site

Title: Senior Data Scientist
Years of Experience: 8+ years
Location: Onsite (The selected candidate is required to relocate to Kovilpatti, Tamil Nadu, for the initial three-month project training session. Post training, the candidate will be relocated to one of our onsite locations: Chennai, Hyderabad, or Pune, based on project allocation.)

Job Description
The Senior Data Scientist will lead the development and implementation of advanced analytics and AI/ML models to solve complex business problems. This role requires deep statistical expertise, hands-on model-building experience, and the ability to translate raw data into strategic insights. The candidate will collaborate with business stakeholders, data engineers, and AI engineers to deploy production-grade models that drive innovation and value.

Key responsibilities
Lead the end-to-end model lifecycle: data exploration, feature engineering, model training, validation, deployment, and monitoring
Develop predictive models, recommendation systems, anomaly detection, NLP models, and generative AI applications
Conduct statistical analysis and hypothesis testing for business experimentation
Optimize model performance using hyperparameter tuning, ensemble methods, and explainable AI (XAI)
Collaborate with data engineering teams to improve data pipelines and quality
Document methodologies, build reusable ML components, and publish technical artifacts
Mentor junior data scientists and contribute to CoE-wide model governance

Technical Skills
ML frameworks: Scikit-learn, TensorFlow, PyTorch, XGBoost
Statistical tools: Python (NumPy, Pandas, SciPy), R, SAS
NLP & LLMs: Hugging Face Transformers, GPT APIs, BERT, LangChain
Model deployment: MLflow, Docker, Azure ML, AWS SageMaker
Data visualization: Power BI, Tableau, Plotly, Seaborn
SQL and NoSQL (Cosmos DB, MongoDB)
Git, CI/CD tools, and model monitoring platforms

Qualification
Master’s in Data Science, Statistics, Mathematics, or Computer Science
Microsoft Certified: Azure Data Scientist Associate or equivalent
Proven success in delivering production-ready ML models with measurable business impact
Publications or patents in AI/ML will be considered a strong advantage

Job Type: Full-time
Pay: Up to ₹80,000.00 per month
Ability to commute/relocate: Kovilpatti, Tamil Nadu: Reliably commute or willing to relocate with an employer-provided relocation package (Required)
Application Question(s): Expected annual salary (INR)?
Experience: Data Scientist: 8 years (Required)
License/Certification: Azure Data Scientist Associate (Required)
Work Location: In person
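
For illustration of the hyperparameter-tuning responsibility above, a minimal sketch with scikit-learn's GridSearchCV over a gradient-boosting ensemble; the toy dataset and small parameter grid are placeholders, not a production configuration:

```python
# Minimal sketch: hyperparameter tuning with cross-validated grid search.
# The breast-cancer toy dataset and small grid are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

grid = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [100, 200], "max_depth": [2, 3]},
    cv=5,
    scoring="roc_auc",
)
grid.fit(X, y)

print("best params:", grid.best_params_)
print("best CV AUC:", round(grid.best_score_, 4))
```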

Posted 1 month ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu

Remote

Title: Senior Data Scientist
Years of Experience: 8+ years
Location: The selected candidate is required to work onsite at our Chennai location for the initial six-month project training and execution period. After the six months, the candidate will be offered remote opportunities.

Job Description
The Senior Data Scientist will lead the development and implementation of advanced analytics and AI/ML models to solve complex business problems. This role requires deep statistical expertise, hands-on model-building experience, and the ability to translate raw data into strategic insights. The candidate will collaborate with business stakeholders, data engineers, and AI engineers to deploy production-grade models that drive innovation and value.

Key responsibilities
Lead the end-to-end model lifecycle: data exploration, feature engineering, model training, validation, deployment, and monitoring
Develop predictive models, recommendation systems, anomaly detection, NLP models, and generative AI applications
Conduct statistical analysis and hypothesis testing for business experimentation
Optimize model performance using hyperparameter tuning, ensemble methods, and explainable AI (XAI)
Collaborate with data engineering teams to improve data pipelines and quality
Document methodologies, build reusable ML components, and publish technical artifacts
Mentor junior data scientists and contribute to CoE-wide model governance

Technical Skills
ML frameworks: Scikit-learn, TensorFlow, PyTorch, XGBoost
Statistical tools: Python (NumPy, Pandas, SciPy), R, SAS
NLP & LLMs: Hugging Face Transformers, GPT APIs, BERT, LangChain
Model deployment: MLflow, Docker, Azure ML, AWS SageMaker
Data visualization: Power BI, Tableau, Plotly, Seaborn
SQL and NoSQL (Cosmos DB, MongoDB)
Git, CI/CD tools, and model monitoring platforms

Qualification
Master’s in Data Science, Statistics, Mathematics, or Computer Science
Microsoft Certified: Azure Data Scientist Associate or equivalent
Proven success in delivering production-ready ML models with measurable business impact
Publications or patents in AI/ML will be considered a strong advantage

Job Type: Full-time
Pay: Up to ₹80,000.00 per month
Ability to commute/relocate: Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Required)
Application Question(s): Expected annual salary (INR)?
Experience: Data Scientist: 8 years (Required)
License/Certification: Azure Data Scientist Associate (Required)
Work Location: In person
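
To make the MLflow item in the deployment stack concrete, here is a minimal experiment-tracking sketch that logs parameters, a metric, and a fitted model; the experiment name and the locally stored tracking data are assumptions for the example:

```python
# Minimal sketch: track a training run with MLflow.
# The experiment name and local tracking store are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

mlflow.set_experiment("demo-experiment")

X, y = load_iris(return_X_y=True)

with mlflow.start_run():
    C = 0.5
    model = LogisticRegression(C=C, max_iter=1000).fit(X, y)
    mlflow.log_param("C", C)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")

# Inspect logged runs with: mlflow ui
```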

Posted 1 month ago

Apply

6.0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Data Scientist
100% Remote
Position Type: Contract
Duration: 12 Months

Job Description
We are looking for a highly skilled Data Scientist with 6+ years of experience in handling large-scale real-world datasets, particularly from autonomous fleets, commercial trucks, or connected vehicles. The ideal candidate will have strong analytical, modeling, and machine learning capabilities and will help us unlock actionable insights from multi-modal data collected through our vehicles.

Requirements
Required Skills and Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field
Minimum 6 years of hands-on experience in data science and analytics
Strong programming skills in Python and experience with libraries like Pandas, NumPy, Scikit-learn, and TensorFlow/PyTorch
Experience with big data tools like Spark, AWS/GCP data pipelines, or similar platforms
Deep understanding of time-series data, signal processing, and ML for spatiotemporal datasets
Experience working with connected vehicle data or telemetry from trucks/autonomous systems is highly preferred
Familiarity with vehicle dynamics, CAN data decoding, or driver behaviour modeling is a plus
Proficiency in SQL and data visualization tools (Tableau, Power BI, Plotly, etc.)
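
As an illustration of the time-series work this posting describes, here is a minimal rolling z-score anomaly check on vehicle speed telemetry; the synthetic 1 Hz signal, window, and threshold are placeholders, not a production detection method:

```python
# Minimal sketch: flag anomalous speed readings with a rolling z-score.
# The synthetic 1 Hz speed trace and the threshold of 3 are illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
t = pd.date_range("2024-01-01", periods=600, freq="s")
speed = pd.Series(80 + rng.normal(0, 2, 600), index=t)
speed.iloc[300] = 130  # inject a spurious reading

# Standardize each point against a trailing 60-second window.
roll = speed.rolling("60s")
z = (speed - roll.mean()) / roll.std()

anomalies = speed[z.abs() > 3]
print(anomalies)
```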

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies