2.0 - 6.0 years
0 Lacs
maharashtra
On-site
You will be responsible for working with MS SQL and Python, particularly the Pandas library. Your main tasks will include using SQLAlchemy for data manipulation and providing production support. The ideal candidate has strong skills in MS SQL and Python, along with experience in Pandas and SQLAlchemy. A notice period of 0-30 days is required for this position, and candidates with any graduate degree can apply. The job location is flexible: Bangalore, Pune, Mumbai, Hyderabad, Chennai, Gurgaon, or Noida. To apply, please send your resume to [career@krazymantra.com](mailto:career@krazymantra.com).
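For context on the stack this posting names, here is a minimal, hedged sketch of pulling MS SQL data into Pandas via SQLAlchemy; the connection string, table, and column names are placeholders, not details from the role.

```python
# Sketch only: load an MS SQL table into pandas through SQLAlchemy,
# aggregate it, and write the result back. All names are placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine(
    "mssql+pyodbc://user:password@server/database"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)

orders = pd.read_sql("SELECT order_id, status, amount FROM orders", engine)
summary = orders.groupby("status", as_index=False)["amount"].sum()
summary.to_sql("order_summary", engine, if_exists="replace", index=False)
```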
Posted 23 hours ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Senior Data Engineer at our Bangalore office, you will play a crucial role in developing data pipeline solutions to meet business data needs. Your responsibilities will involve designing, implementing, and maintaining structured and semi-structured data models, utilizing Python and SQL for data collection, enrichment, and cleansing. Additionally, you will create data APIs in Python Flask containers, leverage AI for analytics, and build data visualizations and dashboards using Tableau. Your expertise in infrastructure as code (Terraform) and executing automated deployment processes will be vital for optimizing solutions for costs and performance. You will collaborate with business analysts to gather stakeholder requirements and translate them into detailed technical specifications. Furthermore, you will be expected to stay updated on the latest technical advancements, particularly in the field of GenAI, and recommend changes based on the evolving landscape of Data Engineering and AI. Your ability to embrace change, share knowledge with team members, and continuously learn will be essential for success in this role. To qualify for this position, you should have at least 5 years of experience in data engineering, with a focus on Python programming, data pipeline development, and API design. Proficiency in SQL, hands-on experience with Docker, and familiarity with various relational and NoSQL databases are required. Strong knowledge of data warehousing concepts, ETL processes, and data modeling techniques is crucial, along with excellent problem-solving skills and attention to detail. Experience with cloud-based data storage and processing platforms like AWS, GCP, or Azure is preferred. Bonus skills such as being a GenAI prompt engineer, proficiency in Machine Learning technologies like TensorFlow or PyTorch, knowledge of big data technologies, and experience with data visualization tools like Tableau, Power BI, or Looker will be advantageous. Familiarity with Pandas, Spacy, NLP libraries, agile development methodologies, and optimizing data pipelines for costs and performance are also desirable. Effective communication and collaboration skills in English are essential for interacting with technical and non-technical stakeholders. You should be able to translate complex ideas into simple examples to ensure clear understanding among team members. A bachelor's degree in computer science, IT, engineering, or a related field is required, along with relevant certifications in BI, AI, data engineering, or data visualization tools. The role will be based at The Leela Office on Airport Road, Kodihalli, Bangalore, with a hybrid work schedule allowing you to work from the office on Tuesdays, Wednesdays, Thursdays, and from home on Mondays and Fridays. If you are passionate about turning complex data into valuable insights and have experience in mentoring junior members and collaborating with peers, we encourage you to apply for this exciting opportunity.,
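As one illustration of the "data APIs in Python Flask containers" responsibility above, here is a minimal hedged sketch; the endpoint, CSV source, and column names are assumptions, not specifics from the posting.

```python
# Minimal Flask data API sketch: serve rows of a prepared dataset as JSON.
# The file name, route, and "region" column are illustrative placeholders.
import pandas as pd
from flask import Flask, jsonify

app = Flask(__name__)
df = pd.read_csv("enriched_metrics.csv")  # hypothetical pipeline output

@app.route("/metrics/<region>")
def metrics(region):
    subset = df[df["region"] == region]
    return jsonify(subset.to_dict(orient="records"))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```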
Posted 1 day ago
7.0 - 11.0 years
0 Lacs
indore, madhya pradesh
On-site
As a Senior AI Developer/AI Architect in the AI team, you will have the opportunity to collaborate with and mentor a team of developers. Your primary focus will be on the Fusion AI Team and its AI engine, AI Talos, where you will work with large language models, simulations, and agentic AI to deliver cutting-edge AI capabilities in the service management space. Your responsibilities will include developing complex Python-based AI code to ensure the successful delivery of advanced AI functionality. You will also play a crucial role in team mentoring, guiding junior and mid-level developers in managing their workload efficiently and ensuring tasks are completed according to the product roadmap. Innovation will be a key aspect of your role: you will lead the team in staying updated on the latest AI trends, with a particular focus on large language models and simulations. You will also be responsible for software delivery to customers while adhering to standard security practices. To qualify for this role, you should hold a degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field, and be trained and practiced in Agile. The ideal candidate has at least 7 years of experience developing AI/data science solutions at a senior level. Proficiency in Python and libraries such as Pydantic, PyTorch, PyArrow, scikit-learn, Hugging Face, and Pandas is required, along with extensive knowledge of AI models and their usage, including Llama 2 and Mistral models, training models for classification, and RAG architecture. Experience as a full-stack developer and familiarity with tools like GitHub, Jira, and Docker, as well as GPU-based services architecture and setup, are advantageous. In terms of competencies, strong interpersonal and communication skills are essential. You will collaborate with teams across the business to create end-to-end, high-value use cases and communicate requirement deadlines effectively with senior management. Your collaboration and leadership skills will keep the team motivated and working efficiently towards set targets. If you are ready to take on this challenging role and contribute to the advancement of AI technologies, apply now at Future@fusiongbs.com.
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
As a Data Scientist, you will be responsible for analyzing complex data using statistical and machine learning models to derive actionable insights. You will use Python for data analysis, visualization, and working with various technologies such as APIs, Linux OS, databases, big data technologies, and cloud services. Additionally, you will develop innovative solutions for natural language processing and generative modeling tasks, collaborating with cross-functional teams to understand business requirements and translate them into data science solutions. You will work in an Agile framework, participating in sprint planning, daily stand-ups, and retrospectives. Furthermore, you will research, develop, and analyze computer vision algorithms in areas related to object detection, tracking, product identification and verification, and scene understanding, ensuring model robustness, generalization, accuracy, testability, and efficiency. You will also be responsible for writing product or system development code, designing and maintaining data pipelines and workflows within Azure Databricks for optimal performance and scalability, and communicating findings and insights effectively to stakeholders through reports and visualizations. To qualify for this role, you should have a Master's degree in Data Science, Statistics, Computer Science, or a related field. You should have over 5 years of proven experience in developing machine learning models, particularly for time series data within a financial context. Advanced programming skills in Python or R, with extensive experience in libraries such as Pandas, NumPy, and Scikit-learn are required. Additionally, you should have comprehensive knowledge of AI and LLM technologies, with a track record of developing applications and models. Proficiency in data visualization tools like Tableau, Power BI, or similar platforms is essential. Exceptional analytical and problem-solving abilities, coupled with meticulous attention to detail, are necessary for this role. Superior communication skills are also required to enable the clear and concise presentation of complex findings. Extensive experience in Azure Databricks for data processing, model training, and deployment is preferred, along with proficiency in Azure Data Lake and Azure SQL Database for data storage and management. Experience with Azure Machine Learning for model deployment and monitoring, as well as an in-depth understanding of Azure services and tools for data integration and orchestration, will be beneficial for this position.,
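To make the time-series-in-a-financial-context requirement above concrete, here is a small hedged sketch of lag-feature engineering with Pandas followed by a Scikit-learn fit; the synthetic price series and feature choices are assumptions for illustration only.

```python
# Illustrative only: build lag features from a synthetic price series and
# fit a scikit-learn model on them. Not a production modelling recipe.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
prices = pd.DataFrame({"close": 100 + np.cumsum(rng.normal(size=500))})

for lag in (1, 5, 10):
    prices[f"lag_{lag}"] = prices["close"].shift(lag)
prices = prices.dropna()

X = prices[[c for c in prices.columns if c.startswith("lag_")]]
y = prices["close"]
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(f"in-sample R^2: {model.score(X, y):.3f}")
```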
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
You should have 3-5 years of experience writing and debugging intermediate- to advanced-level Python code, with a good understanding of OOP concepts, APIs, and SQL databases. Additionally, you should have a strong grasp of the fundamentals of Generative AI and large language model (LLM) pipelines such as RAG and OpenAI GPT models, along with experience in NLP and LangChain. Familiarity with the AWS environment and services such as S3, Lambda, Step Functions, and CloudWatch is essential. You should also have excellent analytical and problem-solving skills and be capable of working independently as well as collaboratively in a team-oriented environment. An analytical mind and business acumen are also important for this role, as is the ability to engage with client stakeholders at multiple levels and provide consultative solutions across different domains. Familiarity with Python libraries and frameworks such as Pandas, Scikit-learn, PyTorch, and TensorFlow, and with models such as BERT and GPT, along with experience in machine learning and deep learning, would be beneficial.
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
Chubb is a world-renowned insurance leader with operations spanning across 54 countries and territories, offering a wide range of commercial and personal insurance solutions. Known for its extensive product portfolio, robust distribution network, exceptional financial stability, and global presence, Chubb is committed to providing top-notch services to its diverse clientele. The parent company, Chubb Limited, is publicly listed on the New York Stock Exchange (NYSE: CB) and is a constituent of the S&P 500 index, boasting a workforce of around 43,000 individuals worldwide. For more information, visit www.chubb.com. Chubb India is embarking on an exciting digital transformation journey fueled by a focus on engineering excellence and analytics. The company takes pride in being officially certified as a Great Place to Work for the third consecutive year, underscoring its culture that nurtures innovation, growth, and collaboration. With a talented team of over 2500 professionals, Chubb India promotes a startup mindset that encourages diverse perspectives, teamwork, and a solution-oriented approach. The organization is dedicated to honing expertise in engineering, analytics, and automation, empowering its teams to thrive in the ever-evolving digital landscape. As a Full Stack Data Scientist within the Advanced Analytics team at Chubb, you will play a pivotal role in developing cutting-edge data-driven solutions using state-of-the-art machine learning and AI technologies. This technical position involves leveraging AI and machine learning techniques to automate underwriting processes, enhance claims outcomes, and provide innovative risk solutions. Ideal candidates for this role possess a solid educational background in computer science, data science, statistics, applied mathematics, or related fields, coupled with a penchant for solving complex problems through innovative thinking while maintaining a keen focus on delivering actionable business insights. You should be proficient in utilizing a diverse set of tools, strategies, machine learning algorithms, and programming languages to address a variety of challenges. Key Responsibilities: - Collaborate with global business partners to identify analysis requirements, manage deliverables, present results, and implement models. - Leverage a wide range of machine learning, text and image AI models to extract meaningful features from structured and unstructured data. - Develop and deploy scalable and efficient machine learning models to automate processes, gain insights, and facilitate data-driven decision-making. - Package and publish codes and solutions in reusable Python formats for seamless integration into CI/CD pipelines and workflows. - Ensure high-quality code that aligns with business objectives, quality standards, and secure web development practices. - Build tools for streamlining the modeling pipeline, sharing knowledge, and implementing real-time monitoring and alerting systems for machine learning solutions. - Establish and maintain automated testing and validation infrastructure, troubleshoot pipelines, and adhere to best practices for versioning, monitoring, and reusability. Qualifications: - Proficiency in ML concepts, supervised/unsupervised learning, ensemble techniques, and various ML models including Random Forest, XGBoost, SVM, etc. - Strong experience with Azure cloud computing, containerization technologies (Docker, Kubernetes), and data science frameworks like Pandas, Numpy, TensorFlow, Keras, PyTorch, and sklearn. 
- Hands-on experience with DevOps tools such as Git, Jenkins, Sonar, Nexus, along with data pipeline building, debugging, and unit testing practices. - Familiarity with AI/ML applications, Databricks ecosystem, and statistical/mathematical domains. Why Chubb - Join a leading global insurance company with a strong focus on employee experience and a culture that fosters innovation and excellence. - Benefit from a supportive work environment, industry leadership, and opportunities for personal and professional growth. - Embrace a startup-like culture that values speed, agility, ownership, and continuous improvement. - Enjoy comprehensive employee benefits that prioritize health, well-being, learning, and career advancement. Employee Benefits: - Access to savings and investment plans, upskilling opportunities, health and welfare benefits, and a supportive work environment that encourages inclusivity and growth. Join Us: Your contributions are integral to shaping the future at Chubb. If you are passionate about integrity, innovation, and inclusion and ready to make a difference, we invite you to be part of Chubb India's journey. Apply Now: Chubb India Career Page,
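Reflecting the "reusable Python formats" and ensemble-model qualifications above, here is a hedged sketch of a packaged Scikit-learn pipeline; the insurance-flavoured column names are invented for illustration and are not taken from the posting.

```python
# Sketch only: bundle preprocessing and a model into one reusable object
# that can be versioned, tested, and dropped into a CI/CD pipeline.
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["sum_insured", "claim_amount"]),      # hypothetical columns
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["policy_type"]),
])

model = Pipeline([
    ("prep", preprocess),
    ("clf", RandomForestClassifier(n_estimators=300, random_state=0)),
])
# model.fit(X_train, y_train) once a DataFrame with these columns exists.
```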
Posted 1 day ago
6.0 - 10.0 years
0 Lacs
haryana
On-site
You will be responsible for developing clean, modular Python code for scalable data pipelines. Your role will involve using Pandas to drive data transformation and analysis workflows. Additionally, you will integrate with LLM APIs such as OpenAI's to build smart document solutions. Building robust REST APIs using FastAPI or Flask for data and document services will be a key aspect of this role. Experience with Azure cloud services such as Functions, Blob Storage, and App Services is necessary. The ability to integrate with MongoDB and support document workflows is an added bonus. This is a contract-to-hire position based in Gurgaon, with a duration of 6 months and IST shift timings.
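As a hedged illustration of the FastAPI data/document service mentioned above, here is a minimal sketch; the route, payload fields, and summary logic are assumptions rather than details from the role.

```python
# Minimal FastAPI sketch: accept records and summarize them with pandas.
# Field names and the endpoint path are illustrative placeholders.
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Record(BaseModel):
    name: str
    value: float

@app.post("/records/summary")
def summarize(records: list[Record]):
    df = pd.DataFrame([{"name": r.name, "value": r.value} for r in records])
    return {"count": int(len(df)), "mean_value": float(df["value"].mean())}
```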
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
The 55ip Quant team is seeking a quantitative professional to research, implement, test, and maintain the core algorithms of its technology-enabled investment platform for large investment advisory (RIA) & wealth management firms. As a Research Analyst at JP Morgan Chase within the Asset and Wealth Management and the 55ip Quant team, you will play a crucial role in researching, implementing, testing, and maintaining the core algorithms of the technology-enabled investment platform. This position offers the opportunity to contribute significantly to research projects and grow as an independent researcher. If you have a background in statistical models, software design constructs, and tools, possess strong problem-solving skills, are a motivated team player, and are eager to make a meaningful impact, then this role could be an ideal fit for you. Responsibilities: - Engage in end-to-end research, development, and maintenance of investment algorithms - Contribute to the development and maintenance of optimization models, participate in building the research and development framework - Review investment algorithmic results thoroughly and contribute to the design of the research data platform - Explore datasets for use in new or existing algorithms, engage in agile practices, and collaborate with stakeholders to gather functional requirements - Participate in research and code reviews, adhere to high-quality coding standards and best practices, conduct comprehensive end-to-end unit testing, and offer support during testing and post go-live stages - Drive research innovation through creative and comprehensive experimentation of cutting-edge hardware, advanced analytics, machine learning techniques, and other methodologies - Work in collaboration with technology teams to ensure the alignment of requirements, standards, and integration Required Qualifications: - Experience in a quantitative role - Proficiency in Python, Git, and Jira - A Master's degree in computer science, computational mathematics, or financial engineering - Strong mathematical foundation and practical experience in the finance industry - Proficiency in quantitative, statistical, and machine learning/artificial intelligence techniques and their implementation using Python modules such as Pandas, NumPy, SciPy, SciKit-Learn, etc. - Excellent communication skills (both written and oral) and analytical problem-solving abilities - Strong attention to detail, commitment to delivering high-quality work, and a willingness to learn - Understanding of financial capital markets, various financial instruments (e.g., stocks, ETFs, Mutual Funds), and financial tools (e.g., Bloomberg, Reuters) - Knowledgeable in SQL Preferred Qualifications: - Professional experience with commercial optimizers (e.g., Gurobi, CPLEX) is advantageous - Ability to adapt quickly to time-sensitive requests - Self-motivated, proactive, responsive, with strategic thinking capabilities while also being willing to delve into the specifics and tactics - Understanding of LaTeX and/or RMarkdown would be a plus,
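Since the role above centres on optimization models built with Python's scientific stack, here is a hedged sketch of a minimum-variance portfolio solve using SciPy rather than the commercial optimizers named in the posting; the simulated returns and constraints are assumptions for illustration, not the team's methodology.

```python
# Sketch only: minimum-variance portfolio weights via scipy.optimize.
# Returns are simulated; real inputs and constraints would differ.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
returns = rng.normal(scale=0.01, size=(250, 4))     # 250 days, 4 assets
cov = np.cov(returns, rowvar=False)

def portfolio_variance(w):
    return w @ cov @ w

n = cov.shape[0]
result = minimize(
    portfolio_variance,
    x0=np.full(n, 1.0 / n),
    bounds=[(0.0, 1.0)] * n,                         # long-only weights
    constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},),
)
print("minimum-variance weights:", result.x.round(3))
```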
Posted 1 day ago
9.0 - 13.0 years
0 Lacs
chennai, tamil nadu
On-site
As an ideal candidate for this role, you should possess in-depth knowledge of Python and solid experience creating APIs using FastAPI. You should also have exposure to data libraries such as Pandas (including its DataFrame API) and NumPy, as well as knowledge of Apache open-source components and Apache Spark. Familiarity with lakehouse architecture and open table formats is also desirable. Additionally, you should be well versed in automated unit testing, preferably using PyTest, and have exposure to distributed computing. Experience working in a Linux environment is a must, while working knowledge of Kubernetes would be an added advantage. Basic exposure to ML and MLOps would also be advantageous for this role.
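To illustrate the automated unit testing expectation above, here is a minimal hedged sketch of a PyTest check on a Pandas transformation; the function under test and its columns are invented for the example.

```python
# Sketch only: a pandas transformation plus a PyTest unit test for it.
# Run with `pytest this_file.py`; the column names are placeholders.
import pandas as pd
import pandas.testing as pdt

def add_total(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of df with a total = price * qty column."""
    out = df.copy()
    out["total"] = out["price"] * out["qty"]
    return out

def test_add_total():
    df = pd.DataFrame({"price": [2.0, 3.0], "qty": [1, 4]})
    result = add_total(df)
    expected = pd.Series([2.0, 12.0], name="total")
    pdt.assert_series_equal(result["total"], expected)
```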
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.
Backend Developer (Python/Django):
- Strong working experience with Python-based Django and Flask frameworks.
- Experience in developing microservices-based designs and architectures.
- Strong programming knowledge of JavaScript, HTML5, Python, RESTful APIs, and gRPC APIs.
- Programming experience and object-oriented concepts in Python.
- Knowledge of Python libraries such as NumPy, Pandas, Open3D, OpenCV, and Matplotlib.
- Knowledge of MySQL/Postgres/MSSQL databases.
- Knowledge of 3D geometry.
- Knowledge of SSO/OpenID Connect/OAuth authentication protocols.
- Working experience with version control systems such as GitHub/Bitbucket/GitLab.
- Familiarity with continuous integration and continuous deployment (CI/CD) pipelines.
As a Backend Developer, you will work in the area of Software Engineering, encompassing the development, maintenance, and optimization of software solutions and applications. You will apply scientific methods to analyze and solve software engineering problems, develop and apply software engineering practice and knowledge, exercise original thought and judgement, supervise the technical and administrative work of other software engineers, and build skills and expertise in your discipline to meet standard expectations for the role. Collaboration and teamwork with other software engineers and stakeholders are essential aspects of the role.
Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world. With a responsible and diverse team of 340,000 members across more than 50 countries, Capgemini leverages its over 55-year heritage to unlock technology's value for clients and address their business needs comprehensively. The Group's services and solutions span strategy, design, engineering, AI, cloud, and data, supported by deep industry expertise and a strong partner ecosystem. In 2023, the Group reported global revenues of €22.5 billion.
Posted 1 day ago
2.0 - 6.0 years
0 Lacs
punjab
On-site
We are searching for an experienced Python Developer to become a part of our dynamic development team. The ideal candidate should possess 2 to 5 years of experience in constructing scalable backend applications and APIs using contemporary Python frameworks. This position necessitates a solid foundation in object-oriented programming, web technologies, and collaborative software development. Your responsibilities will involve close collaboration with the design, frontend, and DevOps teams to deliver sturdy and high-performance solutions. Your key responsibilities will include developing, testing, and maintaining backend applications utilizing Django, Flask, or FastAPI. You will also be responsible for building RESTful APIs and incorporating third-party services to enrich platform capabilities. Utilization of data handling libraries such as Pandas and NumPy for efficient data processing is essential. Additionally, writing clean, maintainable, and well-documented code conforming to industry best practices, participating in code reviews, and mentoring junior developers are part of your role. You will collaborate within Agile teams using Scrum or Kanban workflows and troubleshoot and debug production issues proactively and analytically. Required qualifications for this position include 2 to 5 years of backend development experience with Python, proficiency in core and advanced Python concepts, strong command over at least one Python framework (Django, Flask, or FastAPI), experience with data libraries like Pandas and NumPy, understanding of authentication/authorization mechanisms, middleware, and dependency injection, familiarity with version control systems like Git, and comfort working in Linux environments. Must-have skills for this role consist of expertise in backend Python development and web frameworks, experience with Generative AI frameworks (e.g., LangChain, Transformers, OpenAI APIs), strong debugging, problem-solving, and optimization skills, experience with API development and microservices architecture, and a deep understanding of software design principles and security best practices. Good-to-have skills include exposure to Machine Learning libraries (e.g., Scikit-learn, TensorFlow, PyTorch), knowledge of containerization tools (Docker, Kubernetes), familiarity with web servers (e.g., Apache, Nginx) and deployment architectures, understanding of asynchronous programming and task queues (e.g., Celery, AsyncIO), familiarity with Agile practices and tools like Jira or Trello, and exposure to CI/CD pipelines and cloud platforms (AWS, GCP, Azure). In return, we offer competitive compensation based on your skills and experience, generous time off with 18 annual holidays to maintain a healthy work-life balance, continuous learning opportunities while working on cutting-edge projects, and valuable experience in client-facing roles to enhance your professional growth.,
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
The Content and Data Analytics team is an integral part of Global Operations at Elsevier, within the DataOps division. The team primarily provides data analysis services using Databricks, catering to product owners and data scientists of Elsevier's Research Data Platform. Your work in this team will directly contribute to the development of cutting-edge data analytics products for the scientific research sector, including renowned products like Scopus and SciVal. As a Data Analyst II, you are expected to possess a foundational understanding of best practices and project execution, with supervision from senior team members. Your responsibilities will include generating basic insights and recommendations within your area of expertise, supporting analytics team members, and gradually taking the lead on low-complexity analytics projects. Your role will be situated within DataOps, supporting data scientists working within the Domains of the Research Data Platform. The Domains are functional units responsible for delivering various data products through data science algorithms, presenting you with a diverse range of analytical activities. Tasks may involve delving into extensive datasets to address queries, conducting large-scale data preparation, evaluating data science algorithm metrics, and more. To excel in this role, you must possess a sharp eye for detail, strong analytical skills, and proficiency in at least one data analysis system. Curiosity, dedication to quality work, and an interest in the scientific research realm and Elsevier's products are essential. Effective communication with stakeholders worldwide is crucial, hence a high level of English proficiency is required. Requirements for this position include a minimum of 3 years of work experience, coding proficiency in a programming language (preferably Python) and SQL, familiarity with string manipulation functions like regex, prior exposure to data analysis tools such as Pandas or Apache Spark/Databricks, knowledge of basic statistics relevant to data science, and familiarity with visualization tools like Tableau/Power BI. Furthermore, experience with Agile tools like JIRA is advantageous. Stakeholder management skills are crucial, involving building strong relationships with Data Scientists and Product Managers, aligning activities with their goals, and presenting achievements and project updates effectively. In addition to technical competencies, soft skills like effective collaboration, proactive problem-solving, and a drive for results are highly valued. Key results for this role include understanding task requirements, data gathering and refinement, interpretation of large datasets, reporting findings through effective storytelling, formulating recommendations, and identifying new opportunities. Elsevier promotes a healthy work-life balance with various well-being initiatives, shared parental leave, study assistance, and sabbaticals. The company offers comprehensive health insurance, flexible working arrangements, employee assistance programs, and modern family benefits to support employees' holistic well-being. As a global leader in information and analytics, Elsevier plays a pivotal role in advancing science and healthcare outcomes. Your work with the company contributes to addressing global challenges and fostering a sustainable future through innovative technologies and impactful partnerships. Elsevier is committed to a fair and accessible hiring process.
If you require accommodations or adjustments due to a disability or other needs, please notify the company. Furthermore, be cautious of potential scams during your job search and familiarize yourself with the Candidate Privacy Policy for a secure application process. For US job seekers, it's important to know your rights regarding Equal Employment Opportunity laws.,
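As a small hedged illustration of the string-manipulation and Pandas skills this posting lists, here is a sketch using regex-based cleanup on invented example strings; none of the data or patterns come from Elsevier's systems.

```python
# Sketch only: regex-driven string cleanup with pandas on made-up titles.
import pandas as pd

titles = pd.Series([
    "  Scopus:  Citation Analysis ",
    "SciVal -  Research Metrics",
])

cleaned = (
    titles.str.strip()                           # trim surrounding whitespace
          .str.replace(r"\s+", " ", regex=True)  # collapse repeated spaces
)
product = cleaned.str.extract(r"^(\w+)")[0]      # leading product name
print(product.tolist())  # ['Scopus', 'SciVal']
```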
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Data Visualization and Web Development Specialist at Inclusive Minds, you will be a crucial part of the team responsible for transforming intricate electoral data into engaging visual representations that facilitate strategic decision-making for political campaigns. Your role will involve utilizing your expertise in web visualization frameworks like D3.js, Chart.js, Highcharts, or equivalent tools to present data in a user-friendly and impactful manner. Candidates aspiring to join our dynamic data and analytics team should possess a solid technical background encompassing the following key areas: - Proficiency in utilizing web visualization frameworks such as D3.js, Chart.js, Highcharts, or their equivalents to create visually appealing representations of data. - Strong command over SQL with practical experience in managing relational databases (optional). - Competence in programming languages like Python, R, or PHP for data processing, analytics, and visualization purposes. - Familiarity with libraries such as Pandas, NumPy, and Matplotlib for efficient data handling within the Python environment. - Ability to automate data processing workflows and derive actionable insights from complex datasets. - Proficient in HTML, CSS, and JavaScript for constructing and enhancing interactive web-based dashboards. - Knowledge of front-end frameworks like React.js or Vue.js would be advantageous. - Experience in integrating APIs and external data sources into web applications. - Understanding of database optimization techniques for efficient management of extensive electoral datasets. In this role, you will have the opportunity to contribute to the strategic decision-making process of political campaigns by translating data into visually compelling formats. Your expertise in data visualization and web development will be instrumental in shaping impactful electoral campaigns that drive social change and influence public policies positively.,
Posted 1 day ago
2.0 - 6.0 years
0 Lacs
guwahati, assam
On-site
You are an experienced Software Engineer specializing in Machine Learning with at least 2+ years of relevant experience. In this role, you will be responsible for designing, developing, and optimizing machine learning solutions and data systems. Your proven track record in implementing ML models, building scalable systems, and collaborating with cross-functional teams will be essential in solving complex challenges using data-driven approaches. As a Software Engineer - Machine Learning, your primary responsibilities will include designing and implementing end-to-end machine learning solutions, building and optimizing scalable data pipelines, collaborating with data scientists and product teams, monitoring and optimizing deployed models, staying updated with the latest trends in machine learning, debugging complex issues related to ML systems, and documenting processes for knowledge sharing and clarity. To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or related fields. Your technical skills should include a strong proficiency in Python and machine learning libraries such as TensorFlow, PyTorch, or scikit-learn, experience with data processing tools like Pandas, NumPy, and Spark, proficiency in SQL and database systems, hands-on experience with cloud platforms (AWS, GCP, Azure), familiarity with CI/CD pipelines and Git, and experience with model deployment frameworks like Flask, FastAPI, or Docker. Additionally, you should possess strong analytical skills, leadership abilities to guide junior team members, and a proactive approach to learning and collaboration. Preferred qualifications include experience with MLOps tools like MLflow, Kubeflow, or SageMaker, knowledge of big data technologies such as Hadoop, Spark, or Kafka, familiarity with advanced ML techniques like NLP, computer vision, or reinforcement learning, and experience in designing and managing streaming data workflows. Key Performance Indicators for this role include successfully delivering optimized and scalable ML solutions within deadlines, maintaining high model performance in production environments, and ensuring seamless integration of ML models with business applications. Join us in this exciting opportunity to drive innovation and make a significant impact in the field of Machine Learning.,
Posted 1 day ago
2.0 - 6.0 years
0 Lacs
haryana
On-site
As an Artificial Intelligence (AI) Developer in our team, you will have the exciting opportunity to blend cutting-edge AI techniques with scalable web application architectures to design and build intelligent systems. Working closely with cross-functional teams, including data scientists, software engineers, and product managers, you will develop end-to-end solutions that enhance business operations and deliver exceptional user experiences. Your responsibilities will include full-stack development, where you will architect, design, and develop both frontend and backend components of AI-driven web applications. You will build responsive and user-friendly interfaces using modern JavaScript frameworks such as React, Angular, or Vue.js, along with robust backend services using technologies like Node.js, Django, or .NET. Furthermore, you will be responsible for developing and integrating secure RESTful and GraphQL APIs to connect AI models with cloud services and third-party systems. Leveraging cloud platforms like AWS, Azure, or GCP, you will deploy and manage scalable applications and services effectively. Collaborating with data scientists, you will integrate machine learning models, natural language processing, computer vision, and other AI techniques into web applications. You will optimize AI workflows by ensuring seamless data exchange and efficient model inference across the technology stack. In terms of DevOps and deployment, you will implement CI/CD pipelines, containerization using tools like Docker and Kubernetes, and automated testing to ensure efficient and reliable releases. Monitoring application performance and troubleshooting issues in real-time will be essential to maintain high-quality production environments. Your role will also involve close collaboration with cross-functional teams to gather requirements, deliver project updates, and ensure solutions align with business needs. Documenting development processes, API specifications, and integration practices will be crucial to support future enhancements and maintenance. To excel in this role, you must have a degree in Computer Science, Data Science, Engineering, or a related field. Additionally, you should possess a minimum of 5 years of experience with full-stack development and at least 2 years of experience with AI development. Hands-on experience in programming languages like Python, JavaScript (or TypeScript), and/or C#, along with expertise in front-end and backend frameworks, cloud platforms, containerization, and data management, will be essential. If you are detail-oriented, possess strong problem-solving, analytical, and communication skills, and have a passion for continuous learning and innovation, this role is perfect for you. Join us at the intersection of AI and full-stack web development to create robust, intelligent systems that scale in the cloud while delivering intuitive and responsive user experiences.,
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
indore, madhya pradesh
On-site
Join Tecnomi's Innovation Team in Indore as an AI/ML Python Developer! In this full-time, onsite role, you will be involved in cutting-edge machine learning projects that require expertise in TensorFlow, PyTorch, and more. Your responsibilities will include designing and implementing machine learning models, developing data processing pipelines, performing feature engineering, and deploying ML models in production environments. Collaboration with cross-functional teams in an agile setting is a key aspect of this position. The ideal candidate will have 3-5 years of Python development experience and a strong background in ML libraries such as TensorFlow, PyTorch, scikit-learn, pandas, and NumPy. Experience with time-series analysis, neural networks, cloud platforms (preferably AWS), and containerization is also required. A Bachelor's or Master's degree in Computer Science, Data Science, or a related field is preferred. Additionally, knowledge in NLP, reinforcement learning, and MLOps (including MLflow and model versioning) is considered a plus. If you are passionate about AI/ML development and eager to contribute to innovative R&D initiatives involving real-time data processing and advanced AI model development, we encourage you to apply for this exciting opportunity at Tecnomi.,
Posted 1 day ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
Sykatiya Technology Pvt Ltd is a leading semiconductor industry innovator committed to leveraging cutting-edge technology to solve complex problems. We are currently looking for a highly skilled and motivated Data Scientist to join our dynamic team and contribute to our mission of driving innovation through data-driven insights. As the Lead Data Scientist and Machine Learning Engineer at Sykatiya Technology Pvt Ltd, you will play a crucial role in analyzing large datasets to uncover patterns, developing predictive models, and implementing AI/ML solutions. Your responsibilities will include working on projects involving neural networks, deep learning, data mining, and natural language processing (NLP) to drive business value and enhance our products and services.
Key Responsibilities:
- Lead the design and implementation of machine learning models and algorithms to address complex business problems.
- Utilize deep learning techniques to improve neural network models and prediction accuracy.
- Conduct data mining and analysis to extract actionable insights from both structured and unstructured data.
- Apply natural language processing (NLP) techniques for advanced text analytics.
- Develop and maintain end-to-end data pipelines, ensuring data integrity and reliability.
- Collaborate with cross-functional teams to understand business requirements and deliver data-driven solutions.
- Mentor and guide junior data scientists and engineers in best practices and advanced techniques.
- Stay updated with the latest advancements in AI/ML, neural networks, deep learning, data mining, and NLP.
Technical Skills:
- Proficiency in Python and its libraries such as NumPy, pandas, scikit-learn, TensorFlow, Keras, and PyTorch.
- Strong understanding of machine learning algorithms and techniques.
- Extensive experience with neural networks and deep learning frameworks.
- Hands-on experience with data mining and analysis techniques.
- Proficiency in natural language processing (NLP) tools and libraries like NLTK, spaCy, and transformers.
- Proficiency in big data technologies, including Sqoop, Hadoop, HDFS, Hive, and PySpark.
- Experience with cloud platforms, such as AWS services like S3, Step Functions, EventBridge, Athena, RDS, Lambda, and Glue.
- Strong knowledge of SQL and database management systems such as Teradata, MySQL, PostgreSQL, and Snowflake.
- Familiarity with other tools like ExactTarget, Marketo, SAP BO, Agile, and JIRA.
- Strong analytical skills to analyze large datasets and derive actionable insights.
- Excellent problem-solving skills with the ability to think critically and creatively.
- Effective communication skills and teamwork abilities to collaborate with various stakeholders.
Experience:
- At least 8 to 12 years of experience in a similar role.
Posted 1 day ago
0.0 - 12.0 years
0 Lacs
karnataka
On-site
You will be working as an Application Developer at Happiest Minds with a focus on Generative AI technology. With a minimum of 4 years of experience and up to 12 years, you will be responsible for developing applications utilizing your strong programming skills in Python, .NET, or Java. For Python, you should be familiar with frameworks like FastAPI or Flask, and data libraries such as NumPy and Pandas. If you have expertise in .NET, knowledge of ASP.NET and Web API development is required. Proficiency in Java with Spring or Spring Boot is necessary for Java developers. In addition, you should have hands-on experience with at least one of the major cloud platforms and services, such as Azure (Azure App Service, Azure Functions, Azure Storage) or AWS (Elastic Beanstalk, Lambda, S3). Furthermore, you should have practical experience with various databases like Oracle, Azure SQL, SQL Server, Cosmos DB, MySQL, PostgreSQL, or MongoDB. A minimum of 3 months experience in developing Generative AI solutions using any LLMs (Large Language Models) and deploying them on cloud platforms is essential for this role. You will collaborate closely with team members and clients to understand project requirements and effectively translate them into technical solutions. Your problem-solving and analytical skills will be crucial in troubleshooting, debugging, and enhancing solutions to meet project needs effectively.,
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
The Machine Learning Engineer / Data Scientist (Generative AI & Data Engineering) position based in Gurgaon/NCR requires 3 to 5 years of experience in AI & Data Science Enterprise Delivery. As a Team Manager, your primary responsibility will be to manage client expectations and collaborate with stakeholders to fulfill customer requirements. Your duties will encompass various aspects of Data Science, including developing machine learning models to support recommendation systems and NLP projects. You will be tasked with providing actionable insights to optimize products and services. In Data Engineering, your role involves building and maintaining scalable ETL pipelines, optimizing data storage solutions, and ensuring data accuracy and reliability for analytics. Moreover, you will analyze complex datasets to identify trends and patterns, generating insights that drive strategic decisions and enhance client services. Collaborating with product and service teams, you will translate data needs into technical solutions. You will report to the VP Business Growth and engage with external stakeholders, primarily clients. Your technical skills should include proficiency in Python (Pandas, NumPy), SQL, and Java, along with experience in LLMs, LangChain, Generative AI technologies, ML frameworks (TensorFlow, PyTorch), and data engineering tools (Spark, Kafka). Soft skills such as the ability to work independently or in a team, excellent communication, critical thinking, problem-solving abilities, and a proactive approach in dynamic environments are essential. Academic qualifications entail a Bachelor's Degree in Computer Science, Data Analytics, Engineering, or a related field, with 3 to 5 years of experience in Data Science and Data Engineering. Proficiency in key data engineering concepts and strong communication skills are crucial for success in this role.,
Posted 1 day ago
0.0 - 4.0 years
0 Lacs
chennai, tamil nadu
On-site
As an AI Developer at KritiLabs, you will play a crucial role in various aspects of data analysis and model development. Your responsibilities will include assisting in data analysis, supporting the design and implementation of machine learning models, conducting experiments to evaluate model performance, documenting processes and results, collaborating with cross-functional teams, and engaging in ongoing learning opportunities to enhance your understanding of AI/ML concepts and technologies. To excel in this role, you should have a background in Computer Science, Data Science, Mathematics, Statistics, or a related field. Proficiency in programming languages such as Python or R, along with experience in data manipulation libraries like Pandas and NumPy, is essential. A basic understanding of machine learning concepts and algorithms, including regression, classification, and clustering, is required. Strong analytical skills, problem-solving abilities, attention to detail, and the capacity to work with large datasets are key attributes for success in this position. Effective communication skills, both verbal and written, are crucial for articulating complex concepts clearly. Being a team player is vital to collaborate effectively in a fast-paced and collaborative environment. Any prior experience with AI/ML projects or relevant coursework will be advantageous. Additionally, familiarity with machine learning frameworks such as TensorFlow, PyTorch, or scikit-learn will be beneficial. KritiLabs offers a dynamic, innovative, and inclusive work environment that values and celebrates individual contributions. We provide opportunities for growth, competitive benefits including health insurance and retirement plans, and a strong emphasis on maintaining a healthy work-life balance through flexible work arrangements. Join us in Chennai, where you can drive positive change and make a difference while working on cutting-edge projects that challenge conventional thinking and push the boundaries of innovation. English proficiency is mandatory, and knowledge of other languages is an added advantage.,
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
We are looking for a highly skilled and experienced Senior Python & ML Engineer with expertise in PySpark, machine learning, and large language models (LLMs). You will play a key role in designing, developing, and implementing scalable data pipelines, machine learning models, and LLM-powered applications. In this role, you will need to have a solid understanding of Python's ecosystem, distributed computing using PySpark, and practical experience in AI optimization. Your responsibilities will include designing and maintaining robust data pipelines with PySpark, optimizing PySpark jobs for efficiency on large datasets, and ensuring data integrity throughout the pipeline. You will also be involved in developing, training, and deploying machine learning models using key ML libraries such as scikit-learn, TensorFlow, and PyTorch. Tasks will include feature engineering, model selection, hyperparameter tuning, and integrating ML models into production systems for scalability and reliability. Additionally, you will research, experiment with, and integrate state-of-the-art Large Language Models (LLMs) into applications. This will involve developing solutions that leverage LLMs for tasks like natural language understanding, text generation, summarization, and question answering. You will also fine-tune pre-trained LLMs for specific business needs and datasets, and explore techniques for prompt engineering, RAG (Retrieval Augmented Generation), and LLM evaluation. Collaboration is key in this role, as you will work closely with data scientists, engineers, and product managers to understand requirements and translate them into technical solutions. You will mentor junior team members, contribute to best practices for code quality, testing, and deployment, and stay updated on the latest advancements in Python, PySpark, ML, and LLMs. Furthermore, you will be responsible for deploying, monitoring, and maintaining models and applications in production environments using MLOps principles. Troubleshooting and resolving issues related to data pipelines, ML models, and LLM applications will also be part of your responsibilities. To be successful in this role, you should have a Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. Strong proficiency in Python programming, PySpark, machine learning, and LLMs is essential. Experience with cloud platforms like AWS, Azure, or GCP is preferred, along with strong problem-solving, analytical, communication, and teamwork skills. Nice-to-have skills include familiarity with R and Shiny, streaming data technologies, containerization technologies, MLOps tools, graph databases, and contributions to open-source projects.,
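To ground the PySpark pipeline work described above, here is a minimal hedged sketch; the input path, column names, and aggregation are placeholders, not the team's actual pipeline.

```python
# Sketch only: a small PySpark batch job that rolls events up by day.
# Paths and columns are invented for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-rollup").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/")  # hypothetical path
daily = (
    events
    .withColumn("day", F.to_date("event_ts"))
    .groupBy("day", "event_type")
    .agg(F.count("*").alias("event_count"))
)
daily.write.mode("overwrite").parquet("s3://example-bucket/daily_counts/")
```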
Posted 1 day ago
6.0 - 10.0 years
0 Lacs
jaipur, rajasthan
On-site
As an AI / ML Engineer, you will be responsible for utilizing your expertise in the field of Artificial Intelligence and Machine Learning to develop innovative solutions. You should hold a Bachelor's or Master's degree in Computer Science, Engineering, Data Science, AI/ML, Mathematics, or a related field. With a minimum of 6 years of experience in AI/ML, you are expected to demonstrate proficiency in Python and various ML libraries such as scikit-learn, XGBoost, pandas, NumPy, matplotlib, and seaborn. In this role, you will need a strong understanding of machine learning algorithms and deep learning architectures including CNNs, RNNs, and Transformers. Hands-on experience with TensorFlow, PyTorch, or Keras is essential. You should also have expertise in data preprocessing, feature selection, exploratory data analysis (EDA), and model interpretability. Additionally, familiarity with API development and deploying models using frameworks like Flask, FastAPI, or similar tools is required. Experience with MLOps tools such as MLflow, Kubeflow, DVC, and Airflow will be beneficial. Knowledge of cloud platforms like AWS (SageMaker, S3, Lambda), GCP (Vertex AI), or Azure ML is preferred. Proficiency in version control using Git, CI/CD processes, and containerization with Docker is essential for this role. Bonus skills that would be advantageous include familiarity with NLP frameworks (e.g., spaCy, NLTK, Hugging Face Transformers), Computer Vision experience using OpenCV or YOLO/Detectron, and knowledge of Reinforcement Learning or Generative AI (GANs, LLMs). Experience with vector databases such as Pinecone or Weaviate, as well as LangChain for AI agent building, is a plus. Familiarity with data labeling platforms and annotation workflows will also be beneficial. In addition to technical skills, you should possess soft skills such as an analytical mindset, strong problem-solving abilities, effective communication, and collaboration skills. The ability to work independently in a fast-paced, agile environment is crucial. A passion for AI/ML and a proactive approach to staying updated with the latest developments in the field are highly desirable for this role.,
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Specialist, Technical Business Analysis at Fiserv, you will be consulting with project teams and functional units to design important projects or services. You will provide support for existing business systems applications, demonstrating proficiency and leadership skills. Your role will involve efficiently balancing and prioritizing project work, utilizing data to offer insights and actionable recommendations that inform decision-making and strategy development. In this position, your expertise in data analysis, machine learning, and statistical modeling will be crucial. You will uncover insights and develop predictive models to enhance fraud rules. Collaborating with cross-functional teams, you will analyze large volumes of transactional data, identify patterns of fraudulent behavior, and enhance monitoring systems continuously. Effective communication is key, as you will interface with clients, vendors, and business partners periodically. To excel in this role, you will need a Bachelor's degree in Computer Science or Engineering, or relevant work experience. A background in IT development is required, along with a minimum of 5 years of experience in fraud analysis, investigations, or a related field - preferably in financial services or e-commerce. Proficiency in programming languages like Python or R for data analysis and model development, as well as familiarity with SQL for querying and managing relational databases, is essential. Sound knowledge of statistical methods and techniques, including regression analysis, clustering, time series analysis, and classification, is also necessary. Strong analytical skills are a must, enabling you to derive insights from complex datasets. Your ability to effectively communicate technical findings to non-technical stakeholders, work collaboratively in a fast-paced team environment, and present recommendations clearly is critical. Additionally, being self-motivated, proactive, and capable of working both independently and collaboratively is essential for success in this role. It would be advantageous to have knowledge and experience with JIRA, Service Point, and Confluence products. Experience with machine learning frameworks (e.g., scikit-learn, TensorFlow, Keras) and data manipulation libraries (e.g., Pandas, NumPy) would also be beneficial. Thank you for considering employment with Fiserv. To apply, use your legal name and complete the step-by-step profile, attaching your resume. Fiserv is committed to Diversity and Inclusion, and does not accept resume submissions from agencies outside of existing agreements. Be cautious of fraudulent job postings not affiliated with Fiserv, and ensure that communications from Fiserv representatives come from legitimate email addresses.,
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
hyderabad, telangana
On-site
The role of a member in the New Analytics Team based in Hyderabad involves understanding business processes and data to model requirements for creating analytics solutions. You will be responsible for building predictive models and recommendation engines using cutting-edge machine learning techniques to enhance the efficiency and effectiveness of business processes. Your tasks will include churning and analyzing data to identify actionable insights and patterns for business use. Additionally, you will assist the Function Head in data preparation and modeling tasks as required. Collaboration with both business and IT teams is essential for understanding and collecting data. Your responsibilities will also include collecting, collating, cleaning, processing, and transforming large volumes of primarily tabular data comprising numerical, categorical, and some textual information. Applying data preparation techniques such as data filtering, joining, cleaning, missing value imputation, feature extraction, feature engineering, feature selection, dimensionality reduction, feature scaling, and variable transformation will be a part of your routine tasks. You will be expected to apply basic algorithms like linear regression, logistic regression, ANOVA, KNN, various clustering methods, SVM, Naive Bayes, decision trees, principal components, and association rule mining. Additionally, ensemble modeling algorithms like bagging (Random Forest), boosting (GBM, LGBM, XGBoost, CatBoost), time-series modeling, and other state-of-the-art algorithms will also be utilized as required. Your role will involve employing modeling concepts such as hyperparameter optimization, feature selection, stacking, blending, K-fold cross-validation, bias and variance, and combating overfitting. Building predictive models using state-of-the-art machine learning techniques for regression, classification, clustering, recommendation engines, etc., will be a key part of your responsibilities. Furthermore, you will analyze business data to uncover hidden patterns and insights, identify explanatory causes, and make strategic recommendations based on your findings. To excel in this role, you should hold a BE/B. Tech degree in any stream and possess strong expertise in Python libraries like Pandas and Scikit Learn. Proficiency in coding according to the outlined requirements is essential. Experience with Python editors such as PyCharm and/or Jupyter Notebooks is a must, along with the ability to organize code into modules, functions, and/or objects. Knowledge of using ChatGPT for machine learning will be advantageous, while familiarity with basic SQL for querying and Excel for data analysis is necessary. Understanding basics of statistics, including distributions, hypothesis testing, and sampling techniques, is a prerequisite. Experience with Kaggle and familiarity with R are desirable. Ideal candidates will have a minimum of 4 years of experience solving business problems through data analytics, data science, and modeling, with at least 2 years of full-time experience as a data scientist. They should have worked on at least 3 projects involving ML model building that were utilized in production by businesses or other clients. 
Your primary responsibilities will include spending 35% of your time on data preparation for modeling, 35% on building ML/AI models for various business requirements, 20% on performing custom analytics to provide actionable insights to the business, and 10% on assisting the Function Head in data preparation and modeling tasks as needed. Candidates without familiarity with deep learning algorithms, image processing and classification, and text modeling using NLP techniques will not be considered for selection. For applying to this position, please email your application to careers@isb.edu.,
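As a hedged sketch of the modelling concepts named above (ensemble models, K-fold cross-validation), here is a minimal example on synthetic data; it is illustrative only and not the team's modelling setup.

```python
# Sketch only: 5-fold cross-validation of a gradient-boosting classifier
# on synthetic data, echoing the ensemble + K-fold concepts listed above.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)

scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"5-fold ROC AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```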
Posted 1 day ago
2.0 - 6.0 years
0 Lacs
chennai, tamil nadu
On-site
You are a talented and driven Machine Learning Engineer with 2-5 years of experience, looking to join a dynamic team in Chennai. Your expertise lies in machine learning principles and hands-on experience in building, deploying, and managing ML models in production environments. In this role, you will focus on MLOps practices and orchestration to ensure robust, scalable, and automated ML pipelines. Your responsibilities will include designing, developing, and implementing end-to-end MLOps pipelines for deploying, monitoring, and managing machine learning models in production. You will use orchestration tools such as Apache Airflow, Kubeflow, AWS Step Functions, or Azure Data Factory to automate ML workflows. Implementing CI/CD practices for ML code, models, and infrastructure will be crucial for ensuring rapid and reliable releases. You will also establish monitoring and alerting systems for deployed ML models, optimize performance, troubleshoot and debug issues across the ML lifecycle, and create and maintain technical documentation. To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field, along with 2-5 years of professional experience as a Machine Learning Engineer or MLOps Engineer. Your skills should include proficiency in Python and its ML ecosystem, hands-on experience with major cloud platforms and their ML/MLOps services, knowledge of orchestration tools, containerization technologies, CI/CD pipelines, and database systems. Strong problem-solving, analytical, and communication skills are essential for collaborating effectively with Data Scientists, Data Engineers, and Software Developers in an Agile environment.,
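To illustrate the orchestration side of the MLOps responsibilities above, here is a minimal hedged Airflow DAG sketch (assuming Airflow 2.x); the task bodies, DAG id, and schedule are placeholders rather than details from the role.

```python
# Sketch only: a two-step Airflow DAG chaining training and evaluation.
# Task bodies are stubs; a real pipeline would call project code here.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def train_model():
    print("training step placeholder")

def evaluate_model():
    print("evaluation step placeholder")

with DAG(
    dag_id="ml_retraining",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    train = PythonOperator(task_id="train", python_callable=train_model)
    evaluate = PythonOperator(task_id="evaluate", python_callable=evaluate_model)
    train >> evaluate
```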
Posted 1 day ago
The job market for pandas professionals in India is on the rise as more companies are recognizing the importance of data analysis and manipulation in making informed business decisions. Pandas, a popular Python library for data manipulation and analysis, is a valuable skill sought after by many organizations across various industries in India.
Here are 5 major cities in India actively hiring for pandas roles:
1. Bangalore
2. Mumbai
3. Delhi
4. Hyderabad
5. Pune
The average salary range for pandas professionals in India varies based on experience levels. Entry-level positions can expect a salary ranging from ₹4-6 lakhs per annum, while experienced professionals can earn upwards of ₹12-18 lakhs per annum.
Career progression in the pandas domain typically involves moving from roles such as Junior Data Analyst or Data Scientist to Senior Data Analyst, Data Scientist, and eventually to roles like Tech Lead or Data Science Manager.
In addition to pandas, professionals in this field are often expected to have knowledge or experience in the following areas:
- Python programming
- Data visualization tools like Matplotlib or Seaborn
- Statistical analysis
- Machine learning algorithms
Here are 25 interview questions for pandas roles:
- What is pandas in Python? (basic)
- Explain the difference between Series and DataFrame in pandas. (basic)
- How do you handle missing data in pandas? (basic)
- What are the different ways to create a DataFrame in pandas? (medium)
- Explain groupby() in pandas with an example. (medium)
- What is the purpose of pivot_table() in pandas? (medium)
- How do you merge two DataFrames in pandas? (medium)
- What is the significance of the inplace parameter in pandas functions? (medium)
- What are the advantages of using pandas over Excel for data analysis? (advanced)
- Explain the apply() function in pandas with an example. (advanced)
- How do you optimize performance in pandas operations for large datasets? (advanced)
- What is method chaining in pandas? (advanced)
- Explain the working of the cut() function in pandas. (medium)
- How do you handle duplicate values in a DataFrame using pandas? (medium)
- What is the purpose of the nunique() function in pandas? (medium)
- How can you handle time series data in pandas? (advanced)
- Explain the concept of multi-indexing in pandas. (advanced)
- How do you filter rows in a DataFrame based on a condition in pandas? (medium)
- What is the role of the read_csv() function in pandas? (basic)
- How can you export a DataFrame to a CSV file using pandas? (basic)
- What is the purpose of the describe() function in pandas? (basic)
- How do you handle categorical data in pandas? (medium)
- Explain the role of the loc and iloc functions in pandas. (medium)
- How do you perform text data analysis using pandas? (advanced)
- What is the significance of the to_datetime() function in pandas? (medium)
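For quick practice, here is a hedged sketch that exercises a few of the operations these questions reference (missing-data handling, groupby(), and merging) on toy data invented for the example.

```python
# Toy example covering fillna(), groupby(), and merge() from the list above.
import pandas as pd

sales = pd.DataFrame({
    "region": ["North", "South", "North", "South"],
    "amount": [100.0, 200.0, None, 150.0],
})
targets = pd.DataFrame({"region": ["North", "South"], "target": [180, 300]})

sales["amount"] = sales["amount"].fillna(sales["amount"].mean())     # missing data
by_region = sales.groupby("region", as_index=False)["amount"].sum()  # groupby()
report = by_region.merge(targets, on="region", how="left")           # merging DataFrames
print(report)
```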
As you explore pandas jobs in India, remember to enhance your skills, stay updated with industry trends, and practice answering interview questions to increase your chances of securing a rewarding career in data analysis. Best of luck on your job search journey!