2.0 - 3.0 years
1 - 3 Lacs
Pitampura
On-site
IMMEDIATE HIRING FOR THE POSITION OF TRAINER (DATA SCIENCE / DATA ANALYST / BUSINESS ANALYST) (ONSITE ONLY - DELHI) Note: Before applying, please check the requirements thoroughly. https://maps.app.goo.gl/XZxhaf4dT4JDMSnv6 https://www.veridicaltechnologies.com/ Preference for Delhi candidates and those willing to relocate to Delhi immediately. Good knowledge and experience in training on the following: Machine Learning and Deep Learning; Mathematics: Statistics, Probability, Calculus and Linear Algebra; Python, R, SAS, SQL; SciPy Stack: NumPy, pandas and Matplotlib; Tableau, Microsoft Excel, Power BI, MySQL, MongoDB, Oracle; Good communication skills. ACADEMIC QUALIFICATIONS: M.TECH (IT/CS) / MCA / B.TECH (IT/CS) EXPERIENCE: MINIMUM 2 TO 3 YEARS IN TRAINING ONLY CONTACT AT 93195 93915 VERIDICAL TECHNOLOGIES, AGGARWAL PRESTIGE MALL, 512, 5TH FLOOR, RANI BAGH, PITAMPURA, DELHI-110034 LANDMARK: M2K PITAMPURA NEAREST METRO STATION: KOHAT ENCLAVE OR SHAKURPUR Job Type: Full-time Pay: ₹10,747.19 - ₹30,000.00 per month Schedule: Day shift Supplemental Pay: Performance bonus Experience: total work: 1 year (Preferred) Work Location: In person
Posted 1 month ago
5.0 - 6.0 years
5 - 10 Lacs
India
On-site
Job Summary: We are seeking a highly skilled Python Developer to join our team. Key Responsibilities: Design, develop, and deploy Python applications Work independently on machine learning model development, evaluation, and optimization. Implement scalable and efficient algorithms for predictive analytics and automation. Optimize code for performance, scalability, and maintainability. Collaborate with stakeholders to understand business requirements and translate them into technical solutions. Integrate APIs and third-party tools to enhance functionality. Document processes, code, and best practices for maintainability. Required Skills & Qualifications: 5-6 years of professional experience in Python application development. Proficiency in Python libraries such as Pandas, NumPy, SciPy, and Matplotlib. Experience with SQL and NoSQL databases (PostgreSQL, MongoDB, etc.). Hands-on experience with big data technologies (Apache Spark, Delta Lake, Hadoop, etc.). Strong experience in developing APIs and microservices using FastAPI, Flask, or Django. Good understanding of data structures, algorithms, and software development best practices. Strong problem-solving and debugging skills. Ability to work independently and handle multiple projects simultaneously. Good to have - Working knowledge of cloud platforms (Azure/AWS/GCP) for deploying ML models and data applications. Job Type: Full-time Pay: ₹500,000.00 - ₹1,000,000.00 per year Schedule: Fixed shift Work Location: In person Application Deadline: 30/06/2025 Expected Start Date: 07/07/2025
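For context on the API and micro-service work this posting describes (FastAPI/Flask/Django plus the Pandas/NumPy stack), here is a minimal, illustrative FastAPI sketch that serves predictions from a pre-trained model. The model file name and the flat feature schema are assumptions for the example, not details from the posting.

```python
# Minimal sketch of a FastAPI prediction micro-service of the kind this role describes.
# Assumes a pre-trained scikit-learn model serialized to "model.joblib" (hypothetical path).
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Prediction Service")
model = joblib.load("model.joblib")  # hypothetical artifact produced by a training job

class Features(BaseModel):
    values: list[float]  # flat feature vector expected by the model

@app.post("/predict")
def predict(features: Features):
    # Reshape the single observation to (1, n_features) as scikit-learn expects.
    X = np.asarray(features.values).reshape(1, -1)
    prediction = model.predict(X)
    return {"prediction": prediction.tolist()}
```

Run locally with `uvicorn app:app --reload` and POST a JSON body like `{"values": [0.1, 0.2, 0.3]}` to `/predict`.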
Posted 1 month ago
5.0 years
5 - 6 Lacs
Bengaluru
On-site
Company Description At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future. Job Description Responsibilities: Research, design, develop, implement and test econometric, statistical, optimization and machine learning models. Design, write and test modules for Nielsen analytics platforms using Python, R, SQL and/or Spark. Utilize advanced computational/statistics libraries including Spark MLlib, Scikit-learn, SciPy, StatsModels. Collaborate with cross-functional Data Science, Product, and Technology teams to integrate best practices from across the organization. Provide leadership and guidance for the team in the adoption of new tools and technologies to improve our core capabilities. Execute and refine the roadmap to upgrade the modeling/forecasting/control functions of the team to improve upon the core service KPIs. Ensure product quality, stability, and scalability by facilitating code reviews and driving best practices like modular code, unit tests, and incorporating CI/CD workflows. Explain complex data science (e.g. model-related) concepts in simple terms to non-technical internal and external audiences. Qualifications Key Skills: 5+ years of professional work experience in Statistics, Data Science, and/or related disciplines, with a focus on delivering analytics software solutions in a production environment. Strong programming skills in Python with experience in NumPy, Pandas, SciPy and Scikit-learn. Hands-on experience with deep learning frameworks (PyTorch, TensorFlow, Keras). Solid understanding of Machine Learning domains such as Computer Vision, Natural Language Processing and classical Machine Learning. Proficiency in SQL and NoSQL databases for large-scale data manipulation. Experience with cloud-based ML services (AWS SageMaker, Databricks, GCP AI, Azure ML). Knowledge of model deployment (FastAPI, Flask, TensorRT, ONNX), MLOps tools (MLflow, Kubeflow, Airflow) and containerization. Preferred skills: Understanding of LLM fine-tuning, tokenization, embeddings, and multimodal learning. Familiarity with vector databases (FAISS, Pinecone) and retrieval-augmented generation (RAG). Familiarity with advertising intelligence, recommender systems, and ranking models. Knowledge of CI/CD for ML workflows, and software development best practices. Additional Information Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
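As an illustration of the scikit-learn modelling workflow this role references (not Nielsen's actual models), here is a small, self-contained sketch of a preprocessing-plus-classifier pipeline evaluated with cross-validation; the dataset and model choice are placeholders.

```python
# Illustrative scikit-learn workflow: a preprocessing + model pipeline scored with
# 5-fold cross-validation. Dataset and estimator are stand-ins for real project choices.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

pipeline = make_pipeline(
    StandardScaler(),                   # scale features before fitting
    LogisticRegression(max_iter=1000),  # simple, well-understood baseline
)

scores = cross_val_score(pipeline, X, y, cv=5, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```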
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description We aim to bring about a new paradigm in medical image diagnostics; providing intelligent, holistic, ethical, explainable and patient-centric care. We are looking for innovative problem solvers: people who can empathize with the consumer, understand business problems, and design and deliver intelligent products. People who are looking to extend artificial intelligence into unexplored areas. Your primary focus will be on applying deep learning and artificial intelligence techniques to the domain of medical image analysis. Responsibilities Selecting features, building and optimizing classifier engines using deep learning techniques. Understanding the problem and applying suitable image processing techniques. Use techniques from artificial intelligence/deep learning to solve supervised and unsupervised learning problems. Understanding and designing solutions for complex problems related to medical image analysis by using Deep Learning/Object Detection/Image Segmentation. Recommend and implement best practices around the application of statistical modeling. Create, train, test, and deploy various neural networks to solve complex problems. Develop and implement solutions to fit business problems, which may include applying algorithms from a standard statistical tool, deep learning or custom algorithm development. Understanding the requirements and designing solutions and architecture in accordance with them. Participate in code reviews, sprint planning, and Agile ceremonies to drive high-quality deliverables. Design and implement scalable data science architectures for training, inference, and deployment pipelines. Ensure code quality, readability, and maintainability by enforcing software engineering best practices within the data science team. Optimize models for production, including quantization, pruning, and latency reduction for real-time inference. Drive the adoption of versioning strategies for models, datasets, and experiments (e.g., using MLflow, DVC). Contribute to the architectural design of data platforms to support large-scale experimentation and production workloads. Skills and Qualifications Strong software engineering skills in Python (or other languages used in data science) with an emphasis on clean code, modularity, and testability. Excellent understanding of and hands-on experience with Deep Learning techniques such as ANN, CNN, RNN, LSTM, Transformers, VAEs, etc. Must have experience with the TensorFlow or PyTorch framework in building, training, testing, and deploying neural networks. Experience in solving problems in the domain of Computer Vision. Knowledge of data, data augmentation, data curation, and synthetic data generation. Ability to understand the complete problem and design the solutions that best fit all the constraints. Knowledge of the common data science and deep learning libraries and toolkits such as Keras, Pandas, Scikit-learn, NumPy, SciPy, OpenCV, etc. Good applied statistical skills, such as distributions, statistical testing, regression, etc. Exposure to Agile/Scrum methodologies and collaborative development practices. Experience with the development of RESTful APIs. Knowledge of libraries like FastAPI and the ability to apply them to deep learning architectures is essential. Excellent analytical and problem-solving skills with a good attitude and a keenness to adapt to evolving technologies. Experience with medical image analysis will be an advantage.
Experience designing and building ML architecture components (e.g., feature stores, model registries, inference servers). Solid understanding of software design patterns, microservices, and cloud-native architectures. Expertise in model optimization techniques (e.g., ONNX conversion, TensorRT, model distillation). Education: BE/B.Tech; MS/M.Tech will be a bonus. Experience: 3-5 years.
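To illustrate the production-optimization step mentioned above (ONNX conversion of a trained PyTorch model), here is a hedged, minimal sketch. The tiny CNN, single-channel 224x224 input, and file name are placeholders, not a real medical-imaging architecture.

```python
# Minimal sketch: export a small PyTorch vision model to ONNX for optimized inference.
# Architecture, input shape and output path are illustrative assumptions.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # After one 2x pooling step, a 224x224 input becomes 112x112 with 8 channels.
        self.classifier = nn.Linear(8 * 112 * 112, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN().eval()
dummy_input = torch.randn(1, 1, 224, 224)  # single-channel 224x224 image

torch.onnx.export(
    model, dummy_input, "tiny_cnn.onnx",
    input_names=["image"], output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}},  # allow variable batch size at inference
)
```

The exported file can then be loaded by an ONNX-compatible runtime (or further optimized with TensorRT) without the Python training stack.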
Posted 1 month ago
3.0 - 8.0 years
5 - 9 Lacs
Mumbai
Work from Office
3+ years of experience building software solutions using Python Strong fundamentals of Python such as Python data layout, Generators, Decorators, File IO, Dynamic Programming, Algorithms, etc. Working knowledge of the Python standard library and libraries such as an ORM library, NumPy, SciPy, Matplotlib, mlab, etc. Knowledge of fundamental design principles to build a scalable application Knowledge of Python web frameworks Working knowledge of core Java is an added plus Knowledge of web technologies (HTTP, JS) is an added plus A financial background is an added plus Any technical capability in the area of big data analytics is also an added plus Salary Package: As per the industry standard Preferred Programs: BE or BTech or equivalent degree with a strong Mathematics and Statistics foundation (for example, B.Sc. or M.Sc. in Mathematics & Computer Science)
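For illustration of the Python fundamentals this posting lists (generators and decorators in particular), here is a short, self-contained sketch; the file name in the usage comment is hypothetical.

```python
# Small sketch of core Python idioms: a timing decorator and a generator that reads a
# large file lazily, line by line, instead of loading it all into memory.
import time
from functools import wraps

def timed(func):
    """Decorator that reports how long the wrapped function takes."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.perf_counter() - start:.4f}s")
        return result
    return wrapper

def read_lines(path):
    """Generator: yields one stripped line at a time without loading the whole file."""
    with open(path) as handle:
        for line in handle:
            yield line.strip()

@timed
def count_nonempty(path):
    # Generator expressions keep memory use flat even for very large files.
    return sum(1 for line in read_lines(path) if line)

# Usage (hypothetical file): count_nonempty("trades.csv")
```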
Posted 1 month ago
0.0 - 1.0 years
0 - 0 Lacs
Pitampura, Delhi, Delhi
On-site
IMMEDIATE HIRING FOR THE POSITION OF TRAINER (DATA SCIENCE / DATA ANALYST / BUSINESS ANALYST) (ONSITE ONLY - DELHI) Note: Before applying, please check the requirements thoroughly. https://maps.app.goo.gl/XZxhaf4dT4JDMSnv6 https://www.veridicaltechnologies.com/ Preference for Delhi candidates and those willing to relocate to Delhi immediately. Good knowledge and experience in training on the following: Machine Learning and Deep Learning; Mathematics: Statistics, Probability, Calculus and Linear Algebra; Python, R, SAS, SQL; SciPy Stack: NumPy, pandas and Matplotlib; Tableau, Microsoft Excel, Power BI, MySQL, MongoDB, Oracle; Good communication skills. ACADEMIC QUALIFICATIONS: M.TECH (IT/CS) / MCA / B.TECH (IT/CS) EXPERIENCE: MINIMUM 2 TO 3 YEARS IN TRAINING ONLY CONTACT AT 93195 93915 VERIDICAL TECHNOLOGIES, AGGARWAL PRESTIGE MALL, 512, 5TH FLOOR, RANI BAGH, PITAMPURA, DELHI-110034 LANDMARK: M2K PITAMPURA NEAREST METRO STATION: KOHAT ENCLAVE OR SHAKURPUR Job Type: Full-time Pay: ₹10,747.19 - ₹30,000.00 per month Schedule: Day shift Supplemental Pay: Performance bonus Experience: total work: 1 year (Preferred) Work Location: In person
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the AI Pod: Join our innovative AI Pod focused on revolutionizing performance marketing creatives. Our mission is to build cutting-edge internal tools powered by Artificial Intelligence to automate the creation of high-impact marketing videos. By cutting production time, enabling creatives at scale, and boosting ROI, this pod will directly empower our marketing teams to launch faster, test more rigorously, and drive significant performance improvements across platforms like Meta and Google. Role Summary: We are looking for two skilled and passionate AI Engineers to be foundational members of our AI Pod. You will be responsible for designing, developing, and implementing the core AI and Machine Learning models and systems that power our automated video creation platform. You will work closely with other engineers and a marketing stakeholder to translate creative requirements into intelligent, scalable solutions that deliver against our ambitious goals. Key Responsibilities: Develop and implement AI/ML models and algorithms for various stages of the automated video creation pipeline, including (but not limited to) text-to-video generation, voiceover synthesis, animation sequencing, and content summarization/copy generation. Integrate with and leverage external generative AI APIs and platforms (e.g., Text-to-Video tools like Runway, Pika Labs, Kaiber; Voice Generation tools like ElevenLabs, Play.ht; Language Models like OpenAI, Claude). Design and build data pipelines to process input assets (images, text, product info) and prepare them for AI model consumption. Contribute to the development of the "Smart Creative Engine" (Phase 2), focusing on generating creative variations based on audience, placement, and messaging through intelligent automation. Optimize models and pipelines for performance, scalability, and efficiency to ensure rapid video turnaround times. Collaborate with the fullstack engineer to build robust APIs and backend services that expose AI functionalities to the frontend and potential ad platform integrations. Work closely with the Marketing Stakeholder to understand creative needs, gather feedback, and iterate on AI-generated outputs. Stay up-to-date with the latest advancements in generative AI, computer vision, natural language processing, and automated content creation. Implement best practices for model training, evaluation, deployment, and monitoring. Contribute to code reviews and maintain high code quality standards. Required Skills and Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, Artificial Intelligence, Machine Learning, or a related field, or equivalent practical experience. Proven experience developing and deploying AI/ML models in a production environment. Strong programming skills in Python. Proficiency with major AI/ML frameworks such as TensorFlow and/or PyTorch. Experience with libraries like Scikit-Learn, Keras, Matplotlib, SciPy, and potentially OpenCV or Tesseract. Familiarity with generative AI concepts and models (e.g., GANs, Transformers, Diffusion Models). Experience working with and integrating external APIs, particularly those related to AI/ML or content generation. Understanding of data processing, feature engineering, and model evaluation techniques. Ability to work effectively in an Agile, cross-functional team environment. Strong problem-solving skills and a creative approach to technical challenges. Bonus Points (Nice to Have): Experience with Text-to-Video generation techniques or platforms. 
Experience with Voice Generation (TTS) technologies or platforms. Familiarity with cloud platforms (AWS, GCP, Azure) for model deployment and scaling. Experience with containerization (Docker) and orchestration (Kubernetes). Prior experience in the AdTech or Marketing Technology domain. Experience with workflow automation tools like Zapier or Make. Understanding of video processing techniques. What We Offer: The opportunity to be part of a foundational AI Pod with a clear mission and direct impact on business growth. Work on exciting, cutting-edge applications of generative AI in a real-world marketing context. A fast-paced, collaborative, and innovative work environment. The chance to significantly influence the technical direction and success of the AI Pod.
Posted 1 month ago
6.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
About the Role: We are seeking a passionate Data Scientist II to join our team. You’ll work on data gathering, exploratory data analysis (EDA), feature engineering, and developing machine learning models. The role involves writing production-level code, designing and deploying algorithms, and maintaining models in live environments. You’ll also manage multiple problems, mentor junior team members, and contribute to patent or publication writing. This is the job you are searching for if you love: ● Data gathering, EDA, feature engineering, and building models ● Writing idiomatic, production-level code and deploying it in live systems ● Designing algorithms using feature engineering, statistical modeling, and ML techniques ● Maintaining and optimizing models in production ● Working on multiple problem statements simultaneously ● Managing stakeholders—both internal and external ● Writing technical documents for patents and publications ● Mentoring junior data scientists You could be the next game changer if you: ● Can define problems based on product or client requirements ● Have strong coding skills—Python preferred ● Are highly proficient in libraries like pandas, numpy, matplotlib, sklearn, seaborn, nltk, scipy, etc. ● Can process data at scale and understand distributed systems ● Have 3–6 years of experience in analytics or data science roles ● Possess strong critical and creative thinking skills ● Have hands-on experience with PyTorch, Keras, and/or TensorFlow ● Bonus: Published patents or papers in national/international peer-reviewed journals or conferences ● Bonus: Experience with Apache Spark, Hadoop, or related tools ● Bonus: Background in consumer app or product domains
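As a hedged illustration of the EDA, feature-engineering and modelling loop this role describes, here is a compact pandas/scikit-learn sketch. The CSV path, column names ("amount", "channel", "converted") and model choice are hypothetical placeholders.

```python
# Illustrative EDA -> feature engineering -> model -> evaluation loop.
# Assumes a table with numeric features, one categorical "channel" column, and a
# binary "converted" target; all names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("events.csv")             # hypothetical input data
print(df.describe(include="all"))          # quick EDA summary

# Simple feature engineering: fill gaps and one-hot encode the categorical column.
df["amount"] = df["amount"].fillna(df["amount"].median())
df = pd.get_dummies(df, columns=["channel"], drop_first=True)

X = df.drop(columns=["converted"])         # "converted" is the target label
y = df["converted"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```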
Posted 1 month ago
3.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We are looking for a Business Intelligence Analyst who can organize and analyze data from various sources and create reports and metrics. The analyst should be able to both deeply analyze data and succinctly explain metrics and insights from it to different stakeholders. Responsibilities As a BI Analyst, you will be responsible for managing various data and metrics crucial to our business operations, including company sales and revenues, customer-level metrics, and product usage metrics. Additionally, you will collaborate with finance teams, assisting in tasks such as invoicing and cash flow management. This role also involves close collaboration with product and operations teams to analyze product metrics and enhance overall product performance. Key responsibilities: Manage and analyze company sales, revenues, profits, and other KPIs. Analyse photographer-level metrics, e.g. sales, average basket, conversion rate, and growth. Analyse product usage data and collaborate with the product team to improve product adoption and performance. Perform ad-hoc analysis to support various business initiatives. Assist the finance team in invoicing, cash flow management, and revenue forecasting. Requirements Expert-level proficiency in Excel. Familiarity with one or more data visualization and exploration tools, e.g. Tableau, Kibana, Grafana, etc. Strong analytical and problem-solving skills. Comfortable with analyzing large amounts of data and finding patterns and anomalies. Excellent communication and collaboration abilities. Experience in SQL for data querying and manipulation. Experience in Python for data analysis, e.g. pandas, numpy, scipy, matplotlib, etc. Familiarity with statistical analysis techniques. Experience with basic accounting, e.g. balance sheets, P&L statements, double-entry accounting, etc. Experience Required: 3-6 years. (ref:hirist.tech)
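For illustration of the SQL-plus-pandas metric roll-ups such a BI role typically scripts, here is a minimal sketch; the database file, table and column names are hypothetical.

```python
# Sketch of a monthly sales roll-up from a SQL table using pandas.
# "sales.db", the "orders" table and its columns are hypothetical examples.
import sqlite3
import pandas as pd

conn = sqlite3.connect("sales.db")  # hypothetical database file
orders = pd.read_sql("SELECT order_id, order_date, amount FROM orders", conn)

monthly = (
    orders.assign(month=pd.to_datetime(orders["order_date"]).dt.to_period("M"))
    .groupby("month")
    .agg(
        revenue=("amount", "sum"),
        orders=("order_id", "nunique"),
        avg_basket=("amount", "mean"),
    )
    .reset_index()
)
monthly["revenue_growth_pct"] = monthly["revenue"].pct_change() * 100
print(monthly.tail())
```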
Posted 1 month ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description Amazon is investing heavily in building a world class advertising business and we are responsible for defining and delivering a collection of self-service performance advertising products that drive discovery and sales. Our products are strategically important to our Retail and Marketplace businesses, driving long term growth. We deliver billions of ad impressions and millions of clicks daily and are breaking fresh ground to create world-class products. We are highly motivated, collaborative and fun-loving with an entrepreneurial spirit and bias for action. With a broad mandate to experiment and innovate, we are growing at an unprecedented rate with a seemingly endless range of new opportunities. The ATT team, based in Bangalore, is responsible for ensuring that ads are relevant and of good quality, leading to higher conversion for the sellers and providing a great experience for the customers. We deal with one of the world’s largest product catalogs, handle billions of requests a day with plans to grow by an order of magnitude, and use automated systems to validate tens of millions of offers submitted by thousands of merchants in multiple countries and languages. In this role, you will build and develop ML models to address content understanding problems in Ads. These models will rely on a variety of visual and textual features, requiring expertise in both domains. These models need to scale to multiple languages and countries. You will collaborate with engineers and other scientists to build, train and deploy these models. As part of these activities, you will develop production-level code that enables moderation of millions of ads submitted each day. Basic Qualifications 3+ years of experience building machine learning models for business applications PhD, or Master's degree and 6+ years of applied research experience Experience programming in Java, C++, Python or a related language Experience with neural deep learning methods and machine learning Preferred Qualifications Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc. Experience with large scale distributed systems such as Hadoop, Spark, etc. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Karnataka Job ID: A2978163
Posted 1 month ago
1.0 - 7.0 years
0 Lacs
India
On-site
Data Scientist Experience: 1-7 years Location: Pune (Work From Office) Job Description: Strong background in machine learning (unsupervised and supervised techniques) with significant experience in text analytics/NLP. Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, Decision Forests, logistic regression, MLPs, RNNs, etc. Strong programming ability in Python with experience in the Python data science ecosystem: Pandas, NumPy, SciPy, scikit-learn, NLTK, etc. Good knowledge of database query languages like SQL and experience with databases (PostgreSQL/MySQL/Oracle/MongoDB). Excellent verbal and written communication skills; excellent analytical and problem-solving skills. Degree in Computer Science, Engineering or a relevant field is preferred. Proven experience as a Data Analyst or Data Scientist. Good To Have: Familiarity with Hive, Pig and Scala. Experience in embeddings, Retrieval Augmented Generation (RAG), Gen AI. Experience with Data Visualization Tools like matplotlib, plotly, seaborn, ggplot, etc. Experience using cloud technologies on AWS/Microsoft Azure. Job Type: Full-time Benefits: Provident Fund Work Location: In person
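As a hedged example of the text-analytics baseline implied by the stack above (scikit-learn with Naive Bayes on TF-IDF features), here is a tiny, self-contained sketch; the toy corpus and labels are made up for illustration.

```python
# Text-classification baseline: TF-IDF features + Multinomial Naive Bayes.
# The six-sentence corpus is a toy example, not project data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "refund not received for my order", "great product, fast delivery",
    "payment failed twice, very frustrating", "love the quality, will buy again",
    "item arrived broken and support is unresponsive", "excellent service and packaging",
]
labels = ["negative", "positive", "negative", "positive", "negative", "positive"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.33, random_state=0
)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print(clf.predict(["delivery was quick and support was helpful"]))
```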
Posted 1 month ago
2.0 - 3.0 years
10 - 18 Lacs
Chennai
Work from Office
Roles and Responsibilities: Collect and curate data based on specific project requirements. Perform data cleaning, preprocessing, and transformation for model readiness. Select and implement appropriate data models for various applications. Continuously improve model accuracy through iterative learning and feedback loops. Fine-tune large language models (LLMs) for applications such as code generation and data handling. Apply geometric deep learning techniques using PyTorch or TensorFlow. Essential Requirements: Strong proficiency in Python, with experience in writing efficient and clean code. Ability to process and transform natural language data for NLP applications. Solid understanding of modern NLP techniques such as Transformers, Word2Vec, BERT, etc. Strong foundation in mathematics and statistics relevant to machine learning and deep learning. Hands-on experience with Python libraries including NumPy, Pandas, SciPy, Scikit-learn, NLTK, etc. Experience in various data visualization techniques using Python or other tools. Working knowledge of DBMS and fundamental data structures. Familiarity with a variety of ML and optimization algorithms.
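To illustrate the Transformer-based NLP requirements above, here is a hedged sketch of extracting sentence embeddings with a pretrained BERT model via the Hugging Face transformers library; the model choice and mean-pooling strategy are illustrative assumptions, not part of the posting.

```python
# Sketch: sentence embeddings from a pretrained BERT model (feature extraction only).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

sentences = ["Clean the raw text.", "Tokenize and embed it for downstream models."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool token embeddings (ignoring padding) to get one vector per sentence.
mask = inputs["attention_mask"].unsqueeze(-1)
embeddings = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1)
print(embeddings.shape)  # (2, 768) for bert-base-uncased
```

These vectors can then feed a downstream classifier or similarity search, which is one common way such embeddings are used.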
Posted 1 month ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Come work on fantastically high-scale systems with us! Blis is an award-winning, global leader and technology innovator in big data analytics and advertising. We help brands such as McDonald's, Samsung, and Mercedes Benz to understand and effectively reach their best audiences. In doing this, we are industry champions in our commitment to the ethical use of data and believe people should have control over their data and privacy. With offices across four continents, we are a truly international company with a global culture and scale. We’re headquartered in the UK, financially successful and stable, and looking forward to continued growth. We’d love it if you could join us on the journey! We are looking for solid and experienced Data Engineers to work on building out secure, automated, scalable pipelines on GCP. We receive over 350 GB of data an hour and respond to 400,000 decision requests each second, with petabytes of analytical data to work with. We tackle challenges across almost every major discipline of data science, including classification, clustering, optimisation, and data mining. You will be responsible for building stable production-level pipelines, maximising the efficiency of cloud compute to ensure that data is properly enabled for operational and scientific purposes. This is a growing team with big responsibilities and exciting challenges ahead of it, as we look to reach the next 10x level of scale and intelligence. Our employees are passionate about teamwork and technology and we are looking for someone who wants to make a difference within a growing, successful company. At Blis, Data Engineers are a combination of software engineers, cloud engineers, and data processing engineers. They actively design and build production pipeline code, typically in Python, whilst having practical experience in ensuring, policing, and measuring for good data governance, quality, and efficient consumption. To run an efficient landscape, we are ideally looking for candidates who are comfortable with event-driven automation across all aspects of our operational pipelines. As a Blis data engineer, you will seek to understand the data and the problem definition and find efficient solutions; critical thinking is a key component of efficient pipelines and effective reuse, and this must include designing pipelines for the correct controls and recovery points, not only for function and scale. Across the team, everyone supports each other through mentoring, brainstorming, and pairing up. They have a passion for delivering products that delight and astound our customers and that have a long-lasting impact on the business. They do this while also optimising themselves and the team for long-lasting agility, which is often synonymous with practicing Good Engineering. They are almost always adherents of Lean Development and work well in environments with significant amounts of freedom and ambitious goals. Shift: 12 pm - 8 pm (Mon - Fri) Location: Mumbai (Hybrid - 3 days onsite) Key Responsibilities Design, build, monitor, and support large scale data processing pipelines. Support, mentor, and pair with other members of the team to advance our team’s capabilities and capacity. Help Blis explore and exploit new data streams to innovate and support commercial and technical growth. Work closely with Product and be comfortable with taking, making and delivering against fast-paced decisions to delight our customers. The ideal candidate will be comfortable with fast feature delivery with a robust, engineered follow-up.
Skills And Experience 5+ years of direct experience delivering robust, performant data pipelines within the constraints of direct SLAs and commercial financial footprints. Proven experience in architecting, developing, and maintaining Apache Druid and Imply platforms, with a focus on DevOps practices and large-scale system re-architecture. Mastery of building pipelines in GCP, maximising the use of native and natively supported technologies, e.g. Apache Airflow. Mastery of Python for data and computational tasks, with fluency in data cleansing, validation and composition techniques. Hands-on implementation and architectural familiarity with all forms of data sourcing, i.e. streaming data, relational and non-relational databases, and distributed processing technologies (e.g. Spark). Fluency with all appropriate Python libraries typical of data science, e.g. pandas, scikit-learn, scipy, numpy, MLlib and/or other machine learning and statistical libraries. Advanced knowledge of cloud-based services, specifically GCP. Excellent working understanding of server-side Linux. Professional in managing and providing updates on tasks, ensuring appropriate levels of documentation, testing and assurance around their solutions. Desired Experience optimizing both code and config in Spark, Hive, or similar tools. Practical experience working with relational databases, including advanced operations such as partitioning and indexing. Knowledge and experience with tools like AWS Athena or Google BigQuery to solve data-centric problems. Understanding and ability to innovate, apply, and optimize complex algorithms and statistical techniques to large data structures. Experience with Python notebooks, such as Jupyter, Zeppelin, or Google Datalab, to analyze, prototype, and visualize data and algorithmic output. About Us Blis is the geo-powered advertising tech stack. We’ve built a radically different omnichannel advertising solution structured on geography, not identity. Audience Explorer is our powerful audience planning platform delivering actionable intelligence & insight to advertisers. With Blis, advertisers can plan unified audiences with data from premium partners, connected by geo. Buy audiences using smart cookieless technology that can double performance and halve costs. Measure the audience, not just the channel, with patent-pending omnichannel measurement technology. Established in the UK in 2004, Blis now operates in more than 40 offices across five continents, working with the world’s largest and most successful companies, as well as every major media agency. As an equal opportunity employer, we treat all our employees and job applicants fairly and equally. We oppose all forms of unlawful and unfair discrimination and take all reasonable steps to create a work environment in which all employees are treated with respect and dignity. We don't condone or tolerate any form of harassment, by employees or by others who do business with us. Our values Brave We're leaders not followers An innovation and growth mindset helps us solve everyday challenges and achieve breakthroughs. Our passion drives us to innovate. We don’t see barriers, just possibilities. We take ownership and hold ourselves accountable for outcomes, good and bad – and we don’t pass the buck. Love our clients We're client obsessed We do what we say and build trusted relationships with our partners for the long term. We act with integrity. We put our clients at the centre of our business.
We obsess over the best insights, ideas and solutions to deliver WOW, and work with honesty and accountability to get it done. Inclusive We're one team We are empathetic and embrace diversity. Everyone has a voice and can bring their authentic self to work. We care about and support each other – with humility and good humour. Mutual respect and wellbeing are key. We strive to eliminate bias and be open and transparent. Solutions driven We're action oriented Speed matters in business, so we're solution-driven and action-oriented. We value simplification and calculated risk-taking. We are lean, agile and resourceful self-starters. We collaborate and break silos, working thoughtfully and with urgency to solve problems, while learning from mistakes and celebrating wins.
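As an illustration of the Airflow-based GCP pipeline work described in the skills section above, here is a hedged, Airflow 2.x-style sketch of a scheduled extract-transform-load DAG; the task logic, DAG id and schedule are placeholders, not Blis's actual pipelines.

```python
# Minimal Apache Airflow (2.x-style) DAG sketch for a scheduled ETL pipeline.
# Task bodies are placeholders for real extraction, transformation and load logic.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("pull hourly batch from source bucket")   # placeholder for real extraction

def transform(**context):
    print("cleanse, validate and enrich records")   # placeholder for real transforms

def load(**context):
    print("write partitioned output to warehouse")  # placeholder for real load

with DAG(
    dag_id="hourly_ingest_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```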
Posted 1 month ago
3.0 - 5.0 years
11 - 15 Lacs
Bengaluru
Work from Office
Your tasks: Qualify identified prospects, set appointments, make effective visits, study system requirement specifications. Provide an innovative solution schematic to the reporting Manager & pass critical review. With the help of signal processing algorithms, write a plugin/DLL & test on R&S signal sequencer platforms. Engage, demonstrate & obtain acceptance from the Customer. Your Qualifications: Bachelor / Master in Electronics Engineering & Communications / RF & Microwave. Sound knowledge and relevant experience in Signal Processing & related tools like Matlab, SciPy. 2-3 years' experience with Python / C# .NET based software platform development for plugins/add-ons. Interested? We are looking forward to receiving your application! Ideally, you should apply online with the reference number. If you have any questions, please feel free to contact your recruiting contact via LinkedIn or XING. Equal opportunities are important to us. We are looking forward to receiving your application regardless of gender, nationality, ethnic and social origin, religion, ideology, disability, age as well as sexual orientation and identity. Rohde & Schwarz is a global technology company with approximately 14,000 employees and three divisions: Test & Measurement, Technology Systems and Networks & Cybersecurity. For 90 years, the company has been developing cutting-edge technology, pushing the boundaries of what is technically possible and enabling customers from various sectors such as business, government and public authorities to maintain their technological sovereignty. Rohde & Schwarz is a leading supplier of solutions in the fields of Test and Measurement, Broadcasting, Radio Monitoring and Radiolocation as well as Mission-critical Radio Communications. For more than 80 years, the company has been developing, producing and marketing a wide range of electronic products. Headquartered in Munich with subsidiaries and representatives active in over 70 countries around the world, Rohde & Schwarz has built a strong global presence. In India the company is present as Rohde & Schwarz India Pvt. Ltd (RSINDIA), a 100% owned subsidiary of Rohde & Schwarz GmbH & Co. KG, Germany, with its head office located in New Delhi, branch offices in Bangalore, Hyderabad and Mumbai, and field presence at Ahmedabad, Chennai and Pune. With more than 10 channel partners situated at key industrial locations, we serve across the country. Our emphasis is to provide outstanding sales, service and support to our customers. The company has invested sustainably to increase local support capability as well as to provide a fully automated calibration facility for most of the products sold. Rohde & Schwarz India has ISO 9001:2015 certified Quality Management Systems and ISO 17025 NABL Accreditation. The company continuously invests in training its service and sales personnel to maintain a high level of technical competence in pre- and post-sales support and outstanding quality in services, viz. Repairs, Calibration, Product Support & Project Management. Rohde & Schwarz India is a financially stable company rated by CRISIL as SME 1 for more than 5 years now. This rating is the highest in its category. Rohde & Schwarz India is committed to 100% customer satisfaction through innovative product offerings and outstanding support and services.
Our comprehensive and continuously growing range of services is designed to provide customers with the highest level of quality and value throughout the life cycle of our products.
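As a small illustration of the SciPy signal-processing skills this role asks for, here is a hedged sketch that designs a low-pass Butterworth filter and applies it to a noisy test tone; the sample rate, cutoff and signal are arbitrary example values, not R&S requirements.

```python
# Illustrative SciPy signal-processing snippet: low-pass filter a noisy 5 Hz tone.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1_000.0                        # sample rate in Hz (example value)
t = np.arange(0, 1.0, 1.0 / fs)     # one second of samples
clean = np.sin(2 * np.pi * 5 * t)   # 5 Hz tone of interest
noisy = clean + 0.5 * np.random.randn(t.size)

b, a = butter(N=4, Wn=20.0, btype="low", fs=fs)  # 4th-order low-pass at 20 Hz
filtered = filtfilt(b, a, noisy)                 # zero-phase filtering

print("residual RMS error:", np.sqrt(np.mean((filtered - clean) ** 2)))
```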
Posted 1 month ago
10.0 years
3 - 7 Lacs
Hyderābād
On-site
As one of the world’s leading asset managers, Invesco is dedicated to helping investors worldwide achieve their financial objectives. By delivering the combined power of our distinctive investment management capabilities, we provide a wide range of investment strategies and vehicles to our clients around the world. If you're looking for challenging work, smart colleagues, and a global employer with a social conscience, come explore your potential at Invesco. Make a difference every day! Job Description Key Responsibilities / Duties Serves as SME, thought leader, and technical strategy driver for the business capability/product Interfaces with the business, determines their needs, and ensures that the products and services developed are in line with business priorities Assesses the impact of technical solutions on the overall enterprise architecture and the future roadmap Manages the key activities involving technical solution delivery and operations specific to the business capability/product Monitors engineering throughput and recommends practices for reducing cycle time and increasing throughput Keeps abreast of developments in the market for their technical competency or solution that supports their capability/product Monitors technical standards and performance criteria, advising on governance issues and escalating as necessary Focuses on reducing technical debt and increasing the resiliency of the technical platform Experiments with new technologies and acquires new skills to find creative solutions to the unique challenges we will encounter along the way Work Experience / Knowledge: Minimum 10 years of proven experience developing data analytics and visualization software and workflows Experience managing large cross-functional teams Advanced experience with Python and libraries like numpy, pandas, scipy, and matplotlib Advanced database programming experience with both SQL (e.g. Oracle, SQL Server, PostgreSQL, MySQL) and NoSQL (e.g. MongoDB, Parquet) data stores. Intermediate experience with data visualization tools (e.g. Plotly, PowerBI, Tableau, Plotly Dash, or RShiny) Basic to intermediate experience with HTML, CSS, React.js, and other front-end technologies. Intermediate to advanced experience with Microsoft Excel Basic to intermediate experience with Linux server administration, containerized environments (Docker or LXC), git, continuous integration (e.g. Jenkins, Travis-CI, or CircleCI), documentation (e.g. Sphinx), IT security, distributed computing, and parallel computation. Basic to intermediate understanding of Equity, Fixed Income, and Derivative instruments Skills / Other Personal Attributes Required: Comfortable working with ambiguity (e.g. imperfect data, loosely defined concepts, ideas, or goals) and translating these into more tangible outputs Strong analytical and critical thinking skills Self-motivated.
Capable of working with little or no supervision Strong written and verbal communication skills Enjoy challenging and thought-provoking work and have a strong desire to learn and progress Ability to manage multiple tasks and requests Must demonstrate a positive, team-focused attitude Ability to react positively under pressure to meet tight deadlines Good interpersonal skills combined with willingness to listen Structured, disciplined approach to work, with attention to detail Flexible – able to meet changing requirements and priorities Maintenance of up-to-date knowledge in the appropriate technical areas Able to work in a global, multicultural environment Formal Education: (minimum requirement to perform job duties) Master's in Statistics, Computer Science or other similar advanced degrees from a top-tier educational institution preferred CFA, CPA, CIPM, CAIA, and/or FRM preferred, but not required. Full Time / Part Time Full time Worker Type Employee Job Exempt (Yes / No) Yes Workplace Model At Invesco, our workplace model supports our culture and meets the needs of our clients while providing flexibility our employees value. As a full-time employee, compliance with the workplace policy means working with your direct manager to create a schedule where you will work in your designated office at least three days a week, with two days working outside an Invesco office. Why Invesco At Invesco, we act with integrity and do meaningful work to create impact for our stakeholders. We believe our culture is stronger when we all feel we belong, and we respect each other’s identities, lives, health, and well-being. We come together to create better solutions for our clients, our business and each other by building on different voices and perspectives. We nurture and encourage each other to ensure our meaningful growth, both personally and professionally. We believe in a diverse, inclusive, and supportive workplace where everyone feels equally valued, and this starts at the top with our senior leaders having diversity and inclusion goals. Our global focus on diversity and inclusion has grown exponentially and we encourage connection and community through our many employee-led Business Resource Groups (BRGs). What’s in it for you? As an organization we support personal needs, diverse backgrounds and provide internal networks, as well as opportunities to get involved in the community and in the world. Our benefits policy includes, but is not limited to: Competitive Compensation Flexible, Hybrid Work 30 days’ Annual Leave + Public Holidays Life Insurance Retirement Planning Group Personal Accident Insurance Medical Insurance for Employee and Family Annual Health Check-up 26 weeks Maternity Leave Paternal Leave Adoption Leave Near site Childcare Facility Employee Assistance Program Study Support Employee Stock Purchase Plan ESG Commitments and Goals Business Resource Groups Career Development Programs Mentoring Programs Invesco Cares Dress for your Day At Invesco, we offer development opportunities that help you thrive as a lifelong learner in a constantly evolving business environment and ensure your constant growth. Our AI-enabled learning platform delivers curated content based on your role and interests. We ensure our managers and leaders also have many opportunities to advance their skills and competencies that become pivotal in their continuous pursuit of performance excellence.
To know more about us About Invesco: https://www.invesco.com/corporate/en/home.html About our Culture: https://www.invesco.com/corporate/en/about-us/our-culture.html About our D&I policy: https://www.invesco.com/corporate/en/our-commitments/diversity-and-inclusion.html About our CR program: https://www.invesco.com/corporate/en/our-commitments/corporate-responsibility.html Apply for the role @ Invesco Careers: https://careers.invesco.com/india/
Posted 1 month ago
3.0 - 4.0 years
4 - 9 Lacs
Hyderābād
On-site
Job Title: Senior Python Developer – Trading Systems & Market Data Experience: 3–4 Years Location: Hyderabad, Telangana (On-site) Employment Type: Full-Time About the Role: We are seeking a Senior Python Developer with 3–4 years of experience and a strong understanding of stock market dynamics, technical indicators, and trading systems. You’ll take ownership of backtesting frameworks, strategy optimization, and developing high-performance, production-ready trading modules. The ideal candidate is someone who can think critically about trading logic, handle edge cases with precision, and write clean, scalable, and testable code. You should be comfortable working in a fast-paced, data-intensive environment where accuracy and speed are key. Key Responsibilities: Design and maintain robust backtesting and live trading frameworks. Build modules for strategy development, simulation, and optimization. Integrate with real-time and historical market data sources (e.g., APIs, databases). Use libraries like Pandas, NumPy, TA-Lib, Matplotlib, SciPy, etc., for data processing and signal generation. Apply statistical methods to validate strategies (mean, regression, correlation, standard deviation, etc.). Optimize code for low-latency execution and memory efficiency. Collaborate with traders and quants to implement and iterate on ideas. Use Git and manage codebases with best practices (unit testing, modular design, etc.). Required Skills & Qualifications: 3–4 years of Python development experience, especially in data-intensive environments. Strong understanding of algorithms, data structures, and performance optimization. Hands-on with technical indicators, trading strategy design, and data visualization. Proficient with Pandas, NumPy, Matplotlib, SciPy, TA-Lib, etc. Strong SQL skills and experience working with structured and time-series data. Exposure to REST APIs, data ingestion pipelines, and message queues (e.g., Kafka, RabbitMQ) is a plus. Experience in version control systems (Git) and collaborative development workflows. Preferred Experience: Hands-on experience with trading platforms or algorithmic trading systems. Familiarity with order management systems (OMS), execution logic, or market microstructure. Prior work with cloud infrastructure (AWS, GCP) or Docker/Kubernetes. Knowledge of machine learning or reinforcement learning in financial contexts is a bonus. What You’ll Get: Opportunity to work on real-world trading systems with measurable impact. A collaborative and fast-paced environment. A role where your ideas directly translate to production and trading performance. Job Type: Full-time Pay: ₹400,000.00 - ₹900,000.00 per year Location Type: In-person Schedule: Day shift Work Location: In person
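As a hedged illustration of the backtesting and statistical-validation work this role centres on, here is a minimal vectorized moving-average-crossover backtest in pandas/NumPy; the synthetic price series, window lengths and zero-cost assumption are illustrative only, not the firm's strategies.

```python
# Toy vectorized backtest: long when the fast moving average is above the slow one.
# Synthetic prices, 10/50-bar windows and no transaction costs are example assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))), name="close")

fast = prices.rolling(10).mean()
slow = prices.rolling(50).mean()

# Signal is computed on the current bar and traded on the next bar (shift(1)).
position = (fast > slow).astype(int).shift(1).fillna(0)

daily_returns = prices.pct_change().fillna(0)
strategy_returns = position * daily_returns

sharpe = np.sqrt(252) * strategy_returns.mean() / strategy_returns.std()
print(f"Annualized Sharpe (no costs): {sharpe:.2f}")
print(f"Cumulative return: {(1 + strategy_returns).prod() - 1:.2%}")
```

A production framework would add costs, slippage, position sizing and out-of-sample validation on top of this kind of vectorized core.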
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Noida, Gurugram, Bengaluru
Work from Office
Python/Quant Engineer: Key Responsibilities: Design, develop, and maintain scalable Python-based quantitative tools and libraries. Collaborate with quants and researchers to implement and optimize pricing, risk, and trading models. Process and analyze large datasets (market, fundamental, alternative data) to support research and live trading. Build and enhance backtesting frameworks and data pipelines. Integrate models with execution systems and trading platforms. Optimize code for performance and reliability in low-latency environments. Participate in code reviews, testing, and documentation efforts. Required Qualifications: 5+ years of professional experience in quantitative development or similar roles. Proficiency in Python, including libraries like NumPy, Pandas, SciPy, Scikit-learn, and experience in object-oriented programming. Strong understanding of data structures, algorithms, and software engineering best practices. Experience working with large datasets, data ingestion, and real-time processing. Exposure to financial instruments (equities, futures, options, FX, fixed income, etc.) and financial mathematics. Familiarity with backtesting, simulation, and strategy evaluation tools. Experience with Git, Docker, CI/CD, and modern development workflows. Preferred Qualifications: Experience with C++ for performance-critical modules. Knowledge of machine learning techniques and tools (e.g., TensorFlow, XGBoost). Familiarity with SQL/NoSQL databases and cloud platforms (AWS, GCP). Prior experience in hedge funds, proprietary trading firms, investment banks, or financial data providers.
Posted 1 month ago
2.0 - 3.0 years
0 Lacs
Greater Chennai Area
On-site
Roles & Responsibilities To impart training and monitor the student life cycle to ensure standard outcomes. Conduct live in-person/virtual classes to train learners on Adv. Excel, Power BI, Python and advanced Python libraries such as NumPy, Matplotlib, Pandas, Seaborn, SciPy, SQL-MySQL, Data Analysis and basic statistics. Facilitate and support learners' progress/journey to deliver a personalized blended learning experience and achieve the desired skill outcome. Evaluate and grade learners' Project Reports, Project Presentations and other documents. Mentor learners during support, project and assessment sessions. Develop, validate and implement learning content, curriculum and training programs whenever applicable. Liaise with and support respective teams on schedule planning, learner progress, academic evaluation, learning management, etc. Desired profile: 2-3 years of technical training experience in a corporate or ed-tech institute (college lecturer and school teacher profiles are not a fit). Must be proficient in Adv. Excel, Power BI, Python and advanced Python libraries such as NumPy, Matplotlib, Pandas, SciPy, Seaborn, SQL-MySQL, Data Analysis and basic statistics. Experience in training in Data Analysis. Should have worked as a Data Analyst. Must have good analysis or problem-solving skills. Must have good communication and delivery skills. Good knowledge of databases (SQL, MySQL). Additional Advantage: Knowledge of Flask, Core Java
Posted 1 month ago
0.0 - 4.0 years
4 - 9 Lacs
Hyderabad, Telangana
On-site
Job Title: Senior Python Developer – Trading Systems & Market Data Experience: 3–4 Years Location: Hyderabad, Telangana (On-site) Employment Type: Full-Time About the Role: We are seeking a Senior Python Developer with 3–4 years of experience and a strong understanding of stock market dynamics, technical indicators, and trading systems. You’ll take ownership of backtesting frameworks, strategy optimization, and developing high-performance, production-ready trading modules. The ideal candidate is someone who can think critically about trading logic, handle edge cases with precision, and write clean, scalable, and testable code. You should be comfortable working in a fast-paced, data-intensive environment where accuracy and speed are key. Key Responsibilities: Design and maintain robust backtesting and live trading frameworks. Build modules for strategy development, simulation, and optimization. Integrate with real-time and historical market data sources (e.g., APIs, databases). Use libraries like Pandas, NumPy, TA-Lib, Matplotlib, SciPy, etc., for data processing and signal generation. Apply statistical methods to validate strategies (mean, regression, correlation, standard deviation, etc.). Optimize code for low-latency execution and memory efficiency. Collaborate with traders and quants to implement and iterate on ideas. Use Git and manage codebases with best practices (unit testing, modular design, etc.). Required Skills & Qualifications: 3–4 years of Python development experience, especially in data-intensive environments. Strong understanding of algorithms, data structures, and performance optimization. Hands-on with technical indicators, trading strategy design, and data visualization. Proficient with Pandas, NumPy, Matplotlib, SciPy, TA-Lib, etc. Strong SQL skills and experience working with structured and time-series data. Exposure to REST APIs, data ingestion pipelines, and message queues (e.g., Kafka, RabbitMQ) is a plus. Experience in version control systems (Git) and collaborative development workflows. Preferred Experience: Hands-on experience with trading platforms or algorithmic trading systems. Familiarity with order management systems (OMS), execution logic, or market microstructure. Prior work with cloud infrastructure (AWS, GCP) or Docker/Kubernetes. Knowledge of machine learning or reinforcement learning in financial contexts is a bonus. What You’ll Get: Opportunity to work on real-world trading systems with measurable impact. A collaborative and fast-paced environment. A role where your ideas directly translate to production and trading performance. Job Type: Full-time Pay: ₹400,000.00 - ₹900,000.00 per year Location Type: In-person Schedule: Day shift Work Location: In person
Posted 1 month ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Purpose This role includes designing and building AI/ML products at scale to improve customer understanding and sentiment analysis, recommend customer requirements and optimal inputs, and improve process efficiency. This role will collaborate with product owners and business owners. Key Responsibilities Leading a team of junior and experienced data scientists. Lead and participate in end-to-end ML project deployments that require feasibility analysis, design, development, validation, and application of state-of-the-art data science solutions. Push the state of the art in terms of the application of data mining, visualization, predictive modelling, statistics, trend analysis, and other data analysis techniques to solve complex business problems, including lead classification, recommender systems, product life-cycle modelling, design optimization problems, and product cost and weight optimization problems. Leverage and enhance applications utilizing NLP, LLM, OCR, image-based models and Deep Learning neural networks for use cases including text mining, speech and object recognition. Identify future development needs, advance new emerging ML and AI technology, and set the strategy for the data science team. Cultivate a product-centric, results-driven data science organization. Write production-ready code and deploy real-time ML models; expose ML outputs through APIs. Partner with data/ML engineers and vendor partners on input data pipeline development and ML model automation. Provide leadership to establish world-class ML lifecycle management processes. Qualifications Job Requirements MTech / BE / BTech / MSc in CS or Stats or Maths Experience Over 10 years of applied machine learning experience in the fields of Machine Learning, Statistical Modelling, Predictive Modelling, Text Mining, Natural Language Processing (NLP), LLM, OCR, image-based models, Deep Learning and Optimization. Expert Python programmer: SQL, C#, extremely proficient with the SciPy stack (e.g. numpy, pandas, scikit-learn, matplotlib). Proficiency working with open-source deep learning platforms like TensorFlow, Keras, PyTorch. Knowledge of the Big Data Ecosystem (Apache Spark, Hadoop, Hive, EMR, MapReduce). Proficient in Cloud Technologies and Services (Azure Databricks, ADF, Databricks MLflow). Functional Competencies A demonstrated ability to mentor junior data scientists and proven experience in collaborative work environments with external customers. Proficient in communicating technical findings to non-technical stakeholders. Holding routine peer code reviews of ML work done by the team. Experience in leading and/or collaborating with small to mid-sized teams. Experienced in building scalable, highly available distributed systems in production. Experienced in ML lifecycle management and MLOps tools & frameworks.
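For illustration of the TensorFlow/Keras modelling work listed in the requirements above, here is a hedged, minimal sketch of defining, training and evaluating a small Keras classifier; the architecture and synthetic data are placeholders, not a project model.

```python
# Minimal Keras classifier on synthetic data: define, compile, fit, evaluate.
import numpy as np
from tensorflow import keras

X = np.random.rand(1000, 20).astype("float32")  # synthetic features
y = (X.sum(axis=1) > 10).astype("int32")        # synthetic binary target

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

model.fit(X, y, validation_split=0.2, epochs=5, batch_size=32, verbose=0)
loss, acc = model.evaluate(X, y, verbose=0)
print(f"training-set accuracy: {acc:.3f}")
```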
Posted 1 month ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
Title: Lead Data Scientist/ML Engineer (5+ years & above) Required Technical Skillset (GenAI): Language: Python. Frameworks: Scikit-learn, TensorFlow, Keras, PyTorch. Libraries: NumPy, Pandas, Matplotlib, SciPy, boto3. Database: Relational (PostgreSQL), NoSQL (MongoDB). Cloud: AWS. Other Tools: Jenkins, Bitbucket, JIRA, Confluence. A machine learning engineer is responsible for designing, implementing, and maintaining machine learning systems and algorithms that allow computers to learn from and make predictions or decisions based on data. The role typically involves working with data scientists and software engineers to build and deploy machine learning models in a variety of applications such as natural language processing, computer vision, and recommendation systems. The key responsibilities of a machine learning engineer include: Collecting and preprocessing large volumes of data, cleaning it up, and transforming it into a format that can be used by machine learning models. Model building, which includes designing and building machine learning models and algorithms using techniques such as supervised and unsupervised learning, deep learning, and reinforcement learning. Evaluating the performance of machine learning models using metrics such as accuracy, precision, recall, and F1 score. Deploying machine learning models in production environments and integrating them into existing systems using CI/CD pipelines and AWS SageMaker. Monitoring the performance of machine learning models and making adjustments as needed to improve their accuracy and efficiency. Working closely with software engineers, product managers and other stakeholders to ensure that machine learning models meet business requirements and deliver value to the organization. Requirements And Skills Mathematics and Statistics: A strong foundation in mathematics and statistics is essential. They need to be familiar with linear algebra, calculus, probability, and statistics to understand the underlying principles of machine learning algorithms. Programming Skills: Should be proficient in programming languages such as Python. The candidate should be able to write efficient, scalable, and maintainable code to develop machine learning models and algorithms. Machine Learning Techniques: Should have a deep understanding of various machine learning techniques, such as supervised learning, unsupervised learning, and reinforcement learning, and should also be familiar with different types of models such as decision trees, random forests, neural networks, and deep learning. Data Analysis And Visualization: Should be able to analyze and manipulate large data sets. The candidate should be familiar with data cleaning, transformation, and visualization techniques to identify patterns and insights in the data. Deep Learning Frameworks: Should be familiar with deep learning frameworks such as TensorFlow, PyTorch, and Keras, and should be able to build and train deep neural networks for various applications. Big Data Technologies: A machine learning engineer should have experience working with big data technologies such as Hadoop, Spark, and NoSQL databases. They should be familiar with distributed computing and parallel processing to handle large data sets. Software Engineering: A machine learning engineer should have a good understanding of software engineering principles such as version control, testing, and debugging.
They should be able to work with software development tools such as Git, Jenkins, and Docker. Communication And Collaboration A machine learning engineer should have good communication and collaboration skills to work effectively with cross-functional teams such as data scientists, software developers, and business stakeholders. (ref:hirist.tech)
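As a hedged illustration of the model-evaluation responsibility named in the posting above, here is a minimal sketch of computing accuracy, precision, recall, and F1 with scikit-learn. The synthetic dataset and the choice of logistic regression are assumptions for illustration only, not part of the role.

```python
# Minimal sketch: evaluating a classifier with accuracy, precision, recall and F1,
# the metrics listed in the posting above. The data and model are illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data stands in for real project data.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
y_pred = model.predict(X_test)

print(f"accuracy:  {accuracy_score(y_test, y_pred):.3f}")
print(f"precision: {precision_score(y_test, y_pred):.3f}")
print(f"recall:    {recall_score(y_test, y_pred):.3f}")
print(f"F1:        {f1_score(y_test, y_pred):.3f}")
```

In practice the same metric calls would run against held-out or production data rather than a synthetic split.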
Posted 1 month ago
2.0 - 7.0 years
10 - 15 Lacs
Hyderabad
Work from Office
About ValGenesis
ValGenesis is a leading digital validation platform provider for life sciences companies. The ValGenesis suite of products is used by 30 of the top 50 global pharmaceutical and biotech companies to achieve digital transformation, total compliance, and manufacturing excellence/intelligence across their product lifecycle. Learn more about working for ValGenesis, the de facto standard for paperless validation in Life Sciences: https://www.youtube.com/watch?v=tASq7Ld0JsQ

About the Role:
We are seeking a highly skilled AI/ML Engineer to join our dynamic team to build the next-generation applications for our global customers. If you are a technology enthusiast and highly passionate, we are eager to discuss the potential role with you.

Responsibilities
Implement and deploy machine learning solutions to solve complex problems and deliver real business value, i.e., revenue, engagement, and customer satisfaction.
Collaborate with data product managers, software engineers, and SMEs to identify AI/ML opportunities for improving process efficiency.
Develop production-grade ML models to enhance customer experience, content recommendation, content generation, and predictive analysis.
Monitor and improve model performance via data enhancement, feature engineering, experimentation, and online/offline evaluation.
Stay up to date with the latest in machine learning and artificial intelligence, and influence AI/ML for the life science industry.

Requirements
2 - 4 years of experience in AI/ML engineering, with a track record of handling increasingly complex projects.
Strong programming skills in Python and Rust.
Experience with Pandas, NumPy, SciPy, and OpenCV (for image processing).
Experience with ML frameworks such as scikit-learn, TensorFlow, and PyTorch.
Experience with GenAI tools such as LangChain, LlamaIndex, and open-source vector DBs.
Experience with one or more graph DBs: Neo4j, ArangoDB.
Experience with MLOps platforms such as Kubeflow or MLflow (see the illustrative MLflow sketch after this posting).
Expertise in one or more of the following AI/ML domains: Causal AI, Reinforcement Learning, Generative AI, NLP, Dimension Reduction, Computer Vision, Sequential Models.
Expertise in building, deploying, measuring, and maintaining machine learning models to address real-world problems.
Thorough understanding of the software product development lifecycle, DevOps (build, continuous integration, deployment tools), and best practices.
Excellent written and verbal communication skills and interpersonal skills.
Advanced degree in Computer Science, Machine Learning, or a related field.

We’re on a Mission
In 2005, we disrupted the life sciences industry by introducing the world’s first digital validation lifecycle management system. ValGenesis VLMS® revolutionized compliance-based corporate validation activities and has remained the industry standard. Today, we continue to push the boundaries of innovation, enhancing and expanding our portfolio beyond validation with an end-to-end digital transformation platform. We combine our purpose-built systems with world-class consulting services to help every facet of GxP meet evolving regulations and quality expectations.

The Team You’ll Join
Our customers’ success is our success. We keep the customer experience centered in our decisions, from product to marketing to sales to services to support. Life sciences companies exist to improve humanity’s quality of life, and we honor that mission.
We work together. We communicate openly, support each other without reservation, and never hesitate to wear multiple hats to get the job done.
We think big. Innovation is the heart of ValGenesis. That spirit drives product development as well as personal growth. We never stop aiming upward.
We’re in it to win it. We’re on a path to becoming the number one intelligent validation platform in the market, and we won’t settle for anything less than being a market leader.

How We Work
Our Chennai, Hyderabad and Bangalore offices are onsite, 5 days per week. We believe that in-person interaction and collaboration fosters creativity and a sense of community, and is critical to our future success as a company.

ValGenesis is an equal-opportunity employer that makes employment decisions on the basis of merit. Our goal is to have the best-qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, religion, sex, sexual orientation, gender identity, national origin, disability, or any other characteristics protected by local law.
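As a hedged illustration of the MLOps experience the posting above asks for, here is a minimal sketch of experiment tracking with MLflow. The experiment name, parameters, and metric values are assumptions for illustration only; in a real pipeline the metrics would come from an evaluation step.

```python
# Minimal sketch: experiment tracking with MLflow, one of the MLOps platforms
# named in the posting above. Names, params and metric values are illustrative.
import mlflow

mlflow.set_experiment("demo-classifier")  # hypothetical experiment name

with mlflow.start_run():
    # Log hyperparameters chosen for this (illustrative) training run.
    mlflow.log_param("model_type", "logistic_regression")
    mlflow.log_param("max_iter", 1000)

    # In practice these values would be computed by an evaluation step.
    mlflow.log_metric("accuracy", 0.91)
    mlflow.log_metric("f1", 0.88)
```

Runs logged this way can then be compared in the MLflow tracking UI, which is the kind of model-lifecycle visibility the role emphasizes.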
Posted 1 month ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
PwC AC is hiring for Data Scientist. Apply and get a chance to work with one of the Big 4 companies, #PwC AC.

Job Title: Data Scientist
Years of Experience: 4+ years
Shift Timings: 2PM-11PM
Qualification: Graduate and above (full time)

About PwC CTIO – AI Engineering
PwC’s Commercial Technology and Innovation Office (CTIO) is at the forefront of emerging technology, focused on building transformative AI-powered products and driving enterprise innovation. The AI Engineering team within CTIO is dedicated to researching, developing, and operationalizing cutting-edge technologies such as Generative AI, Large Language Models (LLMs), AI Agents, and more. Our mission is to continuously explore what's next, enabling business transformation through scalable AI/ML solutions while remaining grounded in research, experimentation, and engineering excellence.

Role Overview
We are seeking a Senior Associate – Data Science/ML/DL/GenAI to join our high-impact, entrepreneurial team. This individual will play a key role in designing and delivering scalable AI applications, conducting applied research in GenAI and deep learning, and contributing to the team’s innovation agenda. This is a hands-on, technical role ideal for professionals passionate about AI-driven transformation.

Key Responsibilities
Design, develop, and deploy machine learning, deep learning, and Generative AI solutions tailored to business use cases.
Build scalable pipelines using Python (and frameworks such as Flask/FastAPI) to operationalize data science models in production environments (see the illustrative FastAPI sketch after this posting).
Prototype and implement solutions using state-of-the-art LLM frameworks such as LangChain, LlamaIndex, LangGraph, or similar, and develop applications in Streamlit/Chainlit for demo purposes.
Design advanced prompts and develop agentic LLM applications that autonomously interact with tools and APIs.
Fine-tune and pre-train LLMs (HuggingFace and similar libraries) to align with business objectives.
Collaborate in a cross-functional setup with ML engineers, architects, and product teams to co-develop AI solutions.
Conduct R&D in NLP, CV, and multi-modal tasks, and evaluate model performance with production-grade metrics.
Stay current with AI research and industry trends; continuously upskill to integrate the latest tools and methods into the team’s work.

Required Skills & Experience
4 to 9 years of experience in Data Science/ML/AI roles.
Bachelor’s degree in Computer Science, Engineering, or an equivalent technical discipline (BE/BTech/MCA).
Proficiency in Python and related data science libraries: Pandas, NumPy, SciPy, Scikit-learn, TensorFlow, PyTorch, Keras, etc.
Hands-on experience with Generative AI, including prompt engineering, LLM fine-tuning, and deployment.
Experience with agentic LLMs and task orchestration using tools like LangGraph or AutoGPT-like flows.
Strong knowledge of NLP techniques, transformer architectures, and text analysis.
Proven experience working with cloud platforms (preferably Azure; AWS/GCP also considered).
Understanding of production-level AI systems including CI/CD, model monitoring, and cloud-native architecture (need not develop from scratch).
Familiarity with ML algorithms: XGBoost, GBM, k-NN, SVM, Decision Forests, Naive Bayes, Neural Networks, etc.
Exposure to deploying AI models via APIs and integration into larger data ecosystems.
Strong understanding of model operationalization and lifecycle management.

Good to Have
Experience with Docker, Kubernetes, and containerized deployments for ML workloads.
Use of MLOps tooling and pipelines (e.g., MLflow, Azure ML, SageMaker, etc.).
Experience in full-stack AI applications, including visualization (e.g., Power BI, D3.js).
Demonstrated track record of delivering AI-driven solutions as part of large-scale systems.

Soft Skills & Team Expectations
Strong written and verbal communication; able to explain complex models to business stakeholders.
Ability to independently document work, manage requirements, and self-drive technical discovery.
Desire to innovate, improve, and automate existing processes and solutions.
Active contributor to team knowledge sharing, technical forums, and innovation drives.
Strong interpersonal skills to build relationships across cross-functional teams.
A mindset of continuous learning and technical curiosity.

Preferred Certifications (at least two are preferred)
Certifications in Machine Learning, Deep Learning, or Natural Language Processing.
Python programming certifications (e.g., PCEP/PCAP).
Cloud certifications (Azure/AWS/GCP) such as Azure AI Engineer, AWS ML Specialty, etc.

Why Join PwC CTIO?
Be part of a mission-driven AI innovation team tackling industry-wide transformation challenges.
Gain exposure to bleeding-edge GenAI research, rapid prototyping, and product development.
Contribute to a diverse portfolio of AI solutions spanning pharma, finance, and core business domains.
Operate in a startup-like environment within the safety and structure of a global enterprise.
Accelerate your career as a deep tech leader in an AI-first future.
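As a hedged illustration of the Python/FastAPI deployment skills the posting above lists, here is a minimal sketch of exposing a trained model behind an HTTP endpoint. The service name, route, feature schema, and the use of a small iris model are assumptions for illustration only.

```python
# Minimal sketch: serving a (hypothetical) scikit-learn model with FastAPI,
# in the spirit of the "Build scalable pipelines ... Flask/FastAPI" responsibility above.
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

app = FastAPI(title="demo-model-api")  # hypothetical service name

# Train a small stand-in model at startup; a real service would load a saved artifact.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1_000).fit(X, y)

class Features(BaseModel):
    values: list[float]  # four iris measurements, for illustration

@app.post("/predict")
def predict(features: Features):
    pred = model.predict([features.values])[0]
    return {"predicted_class": int(pred)}

# Run with: uvicorn app:app --reload   (assuming this file is saved as app.py)
```

A POST to /predict with a JSON body such as {"values": [5.1, 3.5, 1.4, 0.2]} would return the predicted class as JSON.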
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Tasks and Responsibilities:
Design and maintain high-quality technical solutions for Risk and Marketing data while collaborating with stakeholders to address emerging business challenges.
Identify and resolve data/process issues, anticipate trends, and highlight shortcomings in the data processing steps.
Develop SAS code using advanced SAS to execute marketing and risk fraud strategies.
Develop SQL scripts to be executed during production deployments on SQL Server.
Develop Python notebooks for User Acceptance Testing and System Integration Testing.
Day-to-day tasks include creating Python notebooks for data validation, data analysis, and data visualization, creating SAS jobs using advanced SAS concepts, and creating automated SQL scripts (see the illustrative data-validation sketch after this posting).
Create Python scripts to automate manual data processing tasks and combine multiple code snippets into single Python notebooks.

Required Skills and Qualifications:
3-5 years of experience as a SAS, SQL, and Python developer with strong hands-on experience.
Good hands-on experience in SAS programming; SAS Certified associates preferred.
In-depth knowledge of Python software development, including frameworks, tools, and systems (NumPy, Pandas, SciPy, PyTorch, etc.).
Hands-on experience in writing SQL queries to fetch data from Microsoft SQL Server.
Excellent analytical and problem-solving skills.
Excellent communication skills and client management skills.
Good to have: experience in Base SAS/Intermediate SAS.
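As a hedged illustration of the Python data-validation work described above, here is a minimal pandas sketch. The column names, validation rules, and sample rows are assumptions for illustration only; real data would typically be pulled from Microsoft SQL Server.

```python
# Minimal sketch: simple data-validation checks with pandas, in the spirit of the
# validation notebooks described above. Columns and rules are illustrative.
import pandas as pd

# Stand-in for data fetched from Microsoft SQL Server (e.g. via pyodbc/SQLAlchemy).
df = pd.DataFrame({
    "account_id": [101, 102, 103, 103, 105],
    "balance": [2500.0, -50.0, 1200.5, 1200.5, None],
    "segment": ["retail", "retail", "corporate", "corporate", "retail"],
})

issues = {
    "duplicate_account_ids": int(df["account_id"].duplicated().sum()),
    "missing_balances": int(df["balance"].isna().sum()),
    "negative_balances": int((df["balance"] < 0).sum()),
    "unexpected_segments": int((~df["segment"].isin(["retail", "corporate"])).sum()),
}

for check, count in issues.items():
    print(f"{check}: {count}")
```

In a notebook, checks like these are usually the first cell after the data pull, so downstream analysis only runs on data that passes them.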
Posted 1 month ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
When you join Verizon
You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.

What You Will Be Doing...
The Commercial Data & Analytics - Impact Analytics team is part of the Verizon Global Services (VGS) organization. The Impact Analytics team addresses high-impact, analytically driven projects focused within three core pillars: Customer Experience, Pricing & Monetization, and Network & Sustainability. In this role, you will analyze large data sets to draw insights and solutions that help drive actionable business decisions. You will also apply advanced analytical techniques and algorithms to help us solve some of Verizon’s most pressing challenges.

Analyze large structured and unstructured datasets to draw meaningful and actionable insights.
Envision and test for corner cases.
Build analytical solutions and models by manipulating large data sets and integrating diverse data sources.
Present the results and recommendations of statistical modeling and data analysis to management and other stakeholders.
Lead the development and implementation of advanced reports and dashboard solutions to support business objectives.
Identify data sources and apply knowledge of data structures, organization, transformation, and aggregation techniques to prepare data for in-depth analysis.
Deeply understand business requirements and translate them into well-defined analytical problems, identifying the most appropriate statistical techniques to deliver impactful solutions.
Assist in building data views from disparate data sources which power insights and business cases.
Apply statistical modeling and ML techniques to data and perform root cause analysis and forecasting (see the illustrative forecasting sketch after this posting).
Develop and implement rigorous frameworks for effective base management.
Collaborate with cross-functional teams to discover the most appropriate data sources and fields that cater to the business needs.
Design modular, reusable Python scripts to automate data processing.
Clearly and effectively communicate complex statistical concepts and model results to both technical and non-technical audiences, translating your findings into actionable insights for stakeholders.

What we’re looking for…
You have strong analytical skills and are eager to work in a collaborative environment with global teams to drive ML applications in business problems, develop end-to-end analytical solutions, and communicate insights and findings to leadership. You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and are able to interact with various partners and cross-functional teams to implement data science driven business solutions.

You Will Need To Have
Bachelor’s degree or six or more years of work experience.
Six or more years of relevant work experience.
Experience in managing a team of data scientists that supports a business function.
Proficiency in SQL, including writing queries for reporting, analysis, and extraction of data from big data systems (Google Cloud Platform, Teradata, Spark, Splunk, etc.).
Curiosity to dive deep into data inconsistencies and perform root cause analysis.
Programming experience in Python (Pandas, NumPy, SciPy, and scikit-learn).
Experience with visualization tools such as Matplotlib, Seaborn, Tableau, Grafana, etc.
A deep understanding of various machine learning algorithms and techniques, including supervised and unsupervised learning.
Understanding of time series modeling and forecasting techniques.

Even better if you have one or more of the following:
Experience with cloud computing platforms (e.g., AWS, Azure, GCP) and deploying machine learning models at scale using platforms like Domino Data Lab or Vertex AI.
Experience in applying statistical ideas and methods to data sets to answer business problems.
Ability to collaborate effectively across teams for data discovery and validation.
Experience in deep learning, recommendation systems, conversational systems, information retrieval, and computer vision.
Expertise in advanced statistical modeling techniques, such as Bayesian inference or causal inference.
Excellent interpersonal, verbal, and written communication skills.

Where you’ll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours
40

Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
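As a hedged illustration of the time series forecasting skills the posting above calls for, here is a minimal Holt-Winters sketch with statsmodels. The synthetic monthly series, its seasonality, and the forecast horizon are assumptions for illustration only, not Verizon data or methodology.

```python
# Minimal sketch: time-series forecasting with Holt-Winters exponential smoothing,
# one common approach to the forecasting work mentioned above. Data is synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly series with an upward trend and yearly seasonality.
rng = np.random.default_rng(0)
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
values = (100 + 2 * np.arange(48)
          + 10 * np.sin(2 * np.pi * np.arange(48) / 12)
          + rng.normal(0, 3, 48))
series = pd.Series(values, index=idx)

# Fit an additive trend/seasonality model and forecast the next six months.
model = ExponentialSmoothing(series, trend="add", seasonal="add", seasonal_periods=12).fit()
forecast = model.forecast(6)
print(forecast.round(1))
```

The forecast values would then feed reports or dashboards, with residual analysis used for the root-cause work the role describes.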
Posted 1 month ago