0 years
0 Lacs
Hyderabad, Telangana, India
On-site
What You'll Do:
SQL Development & Optimization: Write complex and optimized SQL queries, including advanced joins, subqueries, analytical functions, and stored procedures, to extract, manipulate, and analyze large datasets.
Data Pipeline Management: Design, build, and support robust data pipelines to ensure timely and accurate data flow from various sources into our analytical platforms.
Statistical Data Analysis: Apply a strong foundation in statistical data analysis to uncover trends, patterns, and insights from data, contributing to data-driven decision-making.
Data Visualization: Work with various visualization tools (e.g., Google PLX, Tableau, Data Studio, Qlik Sense, Grafana, Splunk) to create compelling dashboards and reports that clearly communicate insights.
Web Development Contribution: Leverage your experience in web development (HTML, CSS, jQuery, Bootstrap) to support data presentation layers or internal tools.
Machine Learning Collaboration: Utilize your familiarity with ML tools and libraries (Scikit-learn, Pandas, NumPy, Matplotlib, NLTK) to assist in data preparation and validation for machine learning initiatives.
Agile Collaboration: Work effectively within an Agile development environment, contributing to sprints and adapting to evolving requirements.
Troubleshooting & Problem-Solving: Apply strong analytical and troubleshooting skills to identify and resolve data-related issues.
Skills Required:
Expert in SQL (joins, subqueries, analytic functions, stored procedures)
Experience building & supporting data pipelines
Strong foundation in statistical data analysis
Knowledge of visualization tools: Google PLX, Tableau, Data Studio, Qlik Sense, Grafana, Splunk, etc.
Experience in web dev: HTML, CSS, jQuery, Bootstrap
Familiarity with ML tools: Scikit-learn, Pandas, NumPy, Matplotlib, NLTK, and more
Hands-on with Agile environments
Strong analytical & troubleshooting skills
Bachelor's in CS, Math, Stats, or equivalent
(ref:hirist.tech)
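The analytic (window) functions the posting emphasizes can be sketched with Python's bundled sqlite3 driver; the table, columns, and sample values below are invented purely for illustration, not taken from the posting.

```python
import sqlite3

# Hypothetical toy table for demonstrating window functions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("North", 100.0), ("North", 300.0), ("South", 200.0), ("South", 400.0)],
)

# Analytic functions: rank rows within each region and attach the
# per-region average, without collapsing rows the way GROUP BY would.
rows = conn.execute(
    """
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk,
           AVG(amount) OVER (PARTITION BY region) AS region_avg
    FROM sales
    ORDER BY region, rnk
    """
).fetchall()

for region, amount, rnk, region_avg in rows:
    print(region, amount, rnk, region_avg)
```

SQLite has supported window functions since 3.25, so this runs anywhere a recent Python is installed.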
Posted 1 month ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Title: Python Developer Experience: 1 – 3 Years Location: Mumbai, Maharashtra (Work From Office) Shift Timings: Regular Shift: 10:00 AM – 6:00 PM We are hiring a Python Developer to develop and maintain risk analytics tools and automate reporting processes in support of commodity risk management. Key Responsibilities: Develop, test, and maintain Python scripts for data analysis and reporting Write scalable, clean code using Pandas, NumPy, Matplotlib, and OOP principles Collaborate with risk analysts to implement process improvements Document workflows and maintain SOPs in Confluence Optimize code performance and adapt to evolving business needs Requirements: Strong hands-on experience with Python, Pandas, NumPy, Matplotlib, and OOP Good understanding of data structures and algorithms Experience with Excel and VBA is an added advantage Exposure to financial/market risk environments is preferred Excellent problem-solving, communication, and documentation skills
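The posting's mix of Pandas, NumPy, and OOP design can be illustrated with a tiny class; the class name, window size, and price series here are assumptions for the sketch, not part of the role.

```python
import numpy as np
import pandas as pd

class RollingVolReport:
    """Minimal OOP sketch: rolling volatility of a price series.

    Everything here (names, window, sample prices) is illustrative.
    """

    def __init__(self, prices: pd.Series, window: int = 3):
        self.prices = prices
        self.window = window

    def daily_returns(self) -> pd.Series:
        # Simple percentage returns between consecutive observations.
        return self.prices.pct_change().dropna()

    def rolling_vol(self) -> pd.Series:
        # Rolling standard deviation of returns over the window.
        return self.daily_returns().rolling(self.window).std()

prices = pd.Series([100.0, 102.0, 101.0, 105.0, 107.0])
report = RollingVolReport(prices)
print(report.rolling_vol().round(4).tolist())
```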
Posted 1 month ago
1.0 - 7.0 years
0 Lacs
India
On-site
Data Scientist Experience: 1-7 years Location: Pune (Work From Office) Job Description: Strong background in machine learning (unsupervised and supervised techniques) with significant experience in text analytics/NLP. Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, Decision Forests, logistic regression, MLPs, RNNs, etc. Strong programming ability in Python with experience in the Python data science ecosystem: Pandas, NumPy, SciPy, scikit-learn, NLTK, etc. Good knowledge of database query languages like SQL and experience with databases (PostgreSQL/MySQL/Oracle/MongoDB). Excellent verbal and written communication skills, and excellent analytical and problem-solving skills. A degree in Computer Science, Engineering, or a relevant field is preferred. Proven experience as a Data Analyst or Data Scientist. Good To Have: Familiarity with Hive, Pig, and Scala. Experience in embeddings, Retrieval Augmented Generation (RAG), and Gen AI. Experience with data visualization tools like matplotlib, plotly, seaborn, ggplot, etc. Experience with cloud technologies on AWS/Microsoft Azure. Job Type: Full-time Benefits: Provident Fund Work Location: In person
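A minimal text-classification sketch in the scikit-learn/NLTK ecosystem the posting names; the toy documents and labels below are invented, and a real NLP workload would use far more data and proper evaluation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled texts, purely for demonstration.
texts = [
    "great product, works perfectly",
    "love it, excellent quality",
    "terrible, broke after a day",
    "awful experience, waste of money",
]
labels = ["pos", "pos", "neg", "neg"]

# TF-IDF features feeding a logistic regression classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["excellent quality product"])[0])
```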
Posted 1 month ago
1.0 - 3.0 years
3 Lacs
Nagercoil
On-site
Job Title: Data Scientist Location: Nagercoil Job Type: Full-time Experience: 1-3 years (Freshers with a good academic background can also apply) Salary: 1.5 - 2.5 LPA About the Role: We are seeking a talented and analytical Data Scientist to join our team. You will be responsible for turning raw data into actionable insights that guide strategic decisions and product improvements. Ideal candidates should have strong problem-solving skills, proficiency in data science tools, and a passion for working with data. Key Responsibilities: Analyze large volumes of structured and unstructured data to find patterns and trends Build predictive models and machine learning algorithms Work closely with business teams to identify opportunities for leveraging data Communicate findings effectively through reports, dashboards, and visualizations Perform data cleaning, validation, and preprocessing Develop A/B testing frameworks and analyze test results Collaborate with software engineers and analysts to implement data-driven solutions Required Skills: Strong knowledge of Python or R for data analysis and machine learning Experience with libraries such as Pandas, NumPy, Scikit-learn, TensorFlow, or PyTorch Proficiency in SQL for data querying Hands-on experience with data visualization tools like Power BI, Tableau, or Matplotlib/Seaborn Understanding of statistics, probability, and algorithms Excellent analytical and problem-solving skills Preferred Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, Mathematics, or a related field Experience with big data tools like Hadoop or Spark is a plus Knowledge of cloud platforms such as AWS, Azure, or Google Cloud Job Types: Full-time, Permanent Pay: Up to ₹25,000.00 per month Schedule: Day shift Work Location: In person
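The A/B-testing responsibility above can be sketched with a standard two-sided, two-proportion z-test using only the standard library; the conversion counts are made-up numbers for illustration.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test, a common A/B-testing check.

    Returns (z statistic, p-value). Uses the pooled-proportion
    standard error and the normal CDF via math.erf.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # P(|Z| > |z|) for a standard normal Z.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 12% vs 15% conversion on 1000 users each.
z, p = two_proportion_ztest(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
print(round(z, 3), round(p, 4))
```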
Posted 1 month ago
0 years
0 Lacs
India
Remote
Position: Data Analyst Intern (Full-Time) Company: Lead India Location: Remote Stipend: ₹25,000/month Duration: 1–3 months (Full-Time Internship) About Lead India: Lead India is a forward-thinking technology company that helps businesses make smarter decisions through data. We provide meaningful internship opportunities for emerging professionals to gain real-world experience in data analysis, reporting, and decision-making. Role Overview: We are seeking a Data Analyst Intern to support our data and product teams in gathering, analyzing, and visualizing business data. This internship is ideal for individuals who enjoy working with numbers, identifying trends, and turning data into actionable insights. Key Responsibilities: Analyze large datasets to uncover patterns, trends, and insights Create dashboards and reports using tools like Excel, Power BI, or Tableau Write and optimize SQL queries for data extraction and analysis Assist in data cleaning, preprocessing, and validation Collaborate with cross-functional teams to support data-driven decisions Document findings and present insights to stakeholders Skills We're Looking For: Strong analytical and problem-solving skills Basic knowledge of SQL and data visualization tools (Power BI, Tableau, or Excel) Familiarity with Python for data analysis (pandas, matplotlib) is a plus Good communication and presentation skills Detail-oriented with a willingness to learn and grow What You’ll Gain: ₹25,000/month stipend Real-world experience in data analysis and reporting Mentorship from experienced analysts and developers Remote-first, collaborative work environment Potential for a Pre-Placement Offer (PPO) based on performance
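The cleaning, preprocessing, and validation duties above can be sketched as a small pandas chain; the column names and messy values are invented for the example.

```python
import numpy as np
import pandas as pd

# Hypothetical messy dataset: duplicates, a missing amount, a missing
# city, and an invalid negative amount.
df = pd.DataFrame({
    "order_id": [1, 2, 2, 3, 4],
    "amount": [250.0, np.nan, np.nan, 99.0, -5.0],
    "city": ["Delhi", "Mumbai", "Mumbai", None, "Pune"],
})

clean = (
    df.drop_duplicates(subset="order_id")                       # one row per order
      .assign(amount=lambda d: d["amount"].fillna(d["amount"].median()))
      .query("amount >= 0")                                     # drop invalid negatives
      .dropna(subset=["city"])                                  # require a city
      .reset_index(drop=True)
)
print(clean)
```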
Posted 1 month ago
0 years
0 Lacs
India
Remote
Job Title: Data Analyst Trainee Location: Remote Job Type: Internship (Full-Time) Duration: 1–3 Months Stipend: ₹25,000/month Department: Data & Analytics Job Summary: We are seeking a motivated and analytical Data Analyst Trainee to join our remote analytics team. This internship is perfect for individuals eager to apply their data skills in real-world projects, generate insights, and support business decision-making through analysis, reporting, and visualization. Key Responsibilities: Collect, clean, and analyze large datasets from various sources Perform exploratory data analysis (EDA) and generate actionable insights Build interactive dashboards and reports using Excel, Power BI, or Tableau Write and optimize SQL queries for data extraction and manipulation Collaborate with cross-functional teams to understand data needs Document analytical methodologies, insights, and recommendations Qualifications: Bachelor’s degree (or final-year student) in Data Science, Statistics, Computer Science, Mathematics, or a related field Proficiency in Excel and SQL Working knowledge of Python (Pandas, NumPy, Matplotlib) or R Understanding of basic statistics and analytical methods Strong attention to detail and problem-solving ability Ability to work independently and communicate effectively in a remote setting Preferred Skills (Nice to Have): Experience with BI tools like Power BI, Tableau, or Google Data Studio Familiarity with cloud data platforms (e.g., BigQuery, AWS Redshift) Knowledge of data storytelling and KPI measurement Previous academic or personal projects in analytics What We Offer: Monthly stipend of ₹25,000 Fully remote internship Mentorship from experienced data analysts and domain experts Hands-on experience with real business data and live projects Certificate of Completion Opportunity for a full-time role based on performance
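A quick sketch of the exploratory data analysis (EDA) step mentioned above: a grouped summary with pandas. The region/revenue data is invented for illustration.

```python
import pandas as pd

# Toy dataset standing in for real business data.
df = pd.DataFrame({
    "region": ["North", "North", "South", "South", "South"],
    "revenue": [120, 180, 90, 110, 100],
})

# Typical first EDA pass: per-group counts, means, and totals.
summary = df.groupby("region")["revenue"].agg(["count", "mean", "sum"])
print(summary)
```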
Posted 1 month ago
2.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Job Title: Data Science Trainer Location: B-11, Sector 2, Noida – Work from Office Only Job Type: Full-Time Experience: 0–2 Years Salary: 20k–30k Immediate Joiners Preferred Job Summary: We are looking for a passionate and knowledgeable Data Science Trainer to join our team. This is an in-office role, ideal for individuals who have strong foundational knowledge in data analytics and data science and are eager to train and mentor aspiring professionals. Freshers with solid subject knowledge and a passion for teaching are welcome to apply. ❗ Note: This is a 100% on-site position. Remote work is not available. If you are only looking for remote opportunities, please do not apply. Key Responsibilities: Deliver classroom training sessions on Data Science, Data Analytics, and related tools and technologies. Develop and update training materials, assignments, and projects. Provide hands-on demonstrations and practical exposure using real-time datasets. Clarify students’ doubts, conduct assessments, and provide constructive feedback. Stay updated with the latest trends in data science and analytics. Monitor learner progress and suggest improvements. Required Skills: Strong knowledge of Python, Data Analysis, Statistics, Machine Learning, Pandas, NumPy, and Matplotlib/Seaborn. Understanding of tools such as Jupyter Notebook, Excel, and SQL is a plus. Good communication and presentation skills. Passion for teaching and mentoring students. Eligibility: Graduates in Computer Science, Statistics, Mathematics, or related fields. Freshers with excellent knowledge of Data Science are encouraged to apply. Candidates must be willing to work on-site at our office location. How to Apply: Interested candidates should apply immediately with their updated resume. Shortlisted candidates will be contacted for a personal interview.
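A classroom-style demonstration of the kind a trainer in this role might walk through: fitting a straight line by least squares with plain NumPy. The data points are made up so the slope and intercept come out near 2 and 1.

```python
import numpy as np

# Invented teaching data, roughly y = 2x + 1 with small noise.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.1, 4.9, 7.2, 8.8])

# Least squares for y = w*x + b via the design matrix [x, 1].
X = np.column_stack([x, np.ones_like(x)])
w, b = np.linalg.lstsq(X, y, rcond=None)[0]
print(round(w, 2), round(b, 2))  # slope and intercept
```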
Posted 1 month ago
8.0 - 13.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Job Title: Data Scientist – OpenCV Experience: 2–3 Years Location: Bangalore Notice Period: Immediate Joiners Only
Job Overview: We are looking for a passionate and driven Data Scientist with a strong foundation in computer vision, image processing, and OpenCV. This role is ideal for professionals with 2–3 years of experience who are excited about working on real-world visual data problems and eager to contribute to impactful projects in a collaborative environment.
Key Responsibilities: Develop and implement computer vision solutions using OpenCV and Python. Work on tasks including object detection, recognition, tracking, and image/video enhancement. Clean, preprocess, and analyze large image and video datasets to extract actionable insights. Collaborate with senior data scientists and engineers to deploy models into production pipelines. Contribute to research and proof-of-concept projects in the field of computer vision and machine learning. Prepare clear documentation for models, experiments, and technical processes.
Required Skills: Proficient in OpenCV and image/video processing techniques. Strong coding skills in Python, with familiarity in libraries such as NumPy, Pandas, and Matplotlib. Solid understanding of basic machine learning and deep learning concepts. Hands-on experience with Jupyter Notebooks; exposure to TensorFlow or PyTorch is a plus. Excellent analytical, problem-solving, and debugging skills. Effective communication and collaboration abilities.
Preferred Qualifications: Bachelor’s degree in Computer Science, Data Science, Electrical Engineering, or a related field. Practical exposure through internships or academic projects in computer vision or image analysis. Familiarity with cloud platforms (AWS, GCP, Azure) is an added advantage.
What We Offer: A dynamic and innovation-driven work culture. Guidance and mentorship from experienced data science professionals. The chance to work on impactful, cutting-edge projects in computer vision. Competitive compensation and employee benefits.
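The image-processing ideas behind this role can be illustrated without OpenCV itself: the sketch below implements a k×k mean filter (conceptually what `cv2.blur` computes, with simplified border handling) and a global threshold in plain NumPy, on a made-up 5×5 "image".

```python
import numpy as np

def box_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Naive k x k mean filter written out in NumPy."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")  # replicate-edge border
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

# Tiny synthetic "image": a single bright pixel on a dark background.
img = np.zeros((5, 5))
img[2, 2] = 9.0

blurred = box_blur(img)
mask = blurred > 0.5  # simple global threshold
print(int(mask.sum()))
```

The bright spike spreads across its 3×3 neighbourhood, so nine pixels survive the threshold.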
Posted 1 month ago
5.0 - 9.0 years
9 - 13 Lacs
Hyderabad
Work from Office
Job Summary: ServCrust is a rapidly growing technology startup with the vision to revolutionize India's infrastructure by integrating digitization and technology throughout the lifecycle of infrastructure projects.
About The Role: As a Data Science Engineer, you will lead data-driven decision-making across the organization. Your responsibilities will include designing and implementing advanced machine learning models, analyzing complex datasets, and delivering actionable insights to various stakeholders. You will work closely with cross-functional teams to tackle challenging business problems and drive innovation using advanced analytics techniques.
Responsibilities: Collaborate with strategy, data engineering, and marketing teams to understand and address business requirements through advanced machine learning and statistical models. Analyze large spatiotemporal datasets to identify patterns and trends, providing insights for business decision-making. Design and implement algorithms for predictive and causal modeling. Evaluate and fine-tune model performance. Communicate recommendations based on insights to both technical and non-technical stakeholders.
Requirements: A Ph.D. in computer science, statistics, or a related field. 5+ years of experience in data science. Experience in geospatial data science is an added advantage. Proficiency in Python (Pandas, NumPy, scikit-learn, PyTorch, StatsModels, Matplotlib, and Seaborn); experience with GeoPandas and Shapely is an added advantage. Strong communication and presentation skills.
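A staple of the geospatial feature work mentioned above is the great-circle distance between coordinate pairs. The haversine sketch below uses only the standard library; the coordinates (approximate city centres of Hyderabad and Bengaluru) are illustrative.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres via the haversine formula."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Approximate coordinates, for illustration only.
d = haversine_km(17.385, 78.4867, 12.9716, 77.5946)
print(round(d, 1))  # roughly 500 km
```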
Posted 1 month ago
5.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the Company: Brakes India is at the forefront of leveraging AI-driven analytics in manufacturing. We are setting up an Advanced Analytics team to drive data-driven decision-making across the organization. About the Role: We are looking for an experienced Senior Data Scientist who will be part of this new Center of Excellence (CoE) and help build a high-performing analytics team. In this role, you will leverage your expertise in data analysis, machine learning, and artificial intelligence to drive insights and develop innovative solutions. You will collaborate with business SMEs and other IT divisions to identify business challenges and implement data-driven strategies that enhance our products and processes and help grow our business. Responsibilities: The Senior Data Scientist will: Collect, clean, and analyze large datasets to extract meaningful insights. Utilize statistical methods to interpret data and identify trends. Design, develop, and implement machine learning models and algorithms tailored to specific business needs. Optimize models for performance and accuracy. Work closely with stakeholders to define project goals and deliver actionable insights. Stay up-to-date with the latest AI/ML trends, tools, and technologies. Experiment with new approaches to enhance our data science capabilities. Present findings and recommendations to both technical and non-technical audiences. Prepare clear documentation for methodologies and results. Develop metrics to assess the effectiveness of models and solutions. Continuously monitor and improve model performance. Lead the CoE to meet its stated objectives for formulating policies, standards, ethics, tools, technology stack, and procedures around the use of AI and ML in the organization, among others. Mentor data scientists and support organizational skilling in AI. Qualifications: Master’s or Bachelor’s degree in Computer Science, Data Science, Statistics, or a related field.
Required Skills: 5-7 years' experience in data science, machine learning, or artificial intelligence, with a strong portfolio of projects. Proficiency in programming languages such as Python, R, or Java, plus SQL. Experience with ML platforms such as Azure Machine Learning, Databricks, and Azure Data Factory, and with ML libraries (e.g., TensorFlow, PyTorch, scikit-learn). Expertise in working with Big Data technologies (e.g., Hadoop, Spark) and databases such as Azure Data Lake, Delta Lake, and Snowflake. Strong in MLOps and deployment: model training, versioning, monitoring, and deployment. Strong statistical analysis skills and experience with data visualization tools (e.g., Power BI, Matplotlib, Seaborn). Experience with deep learning frameworks and natural language processing (NLP). Understanding of business processes and the ability to translate business needs into technical requirements. Experience in AI/ML deployment in the manufacturing industry is highly desired. Excellent problem-solving and critical-thinking skills. Strong communication and interpersonal skills. Experience in setting up CoEs or analytics practices is a plus. Why Join Us? Opportunity to be part of an Advanced Analytics CoE setup. Work on cutting-edge AI & ML projects in manufacturing. Direct impact on business-critical decisions and process optimization. Collaborative work culture with a focus on innovation and growth.
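One concrete form the model-monitoring responsibility above can take is drift detection. The sketch below computes the Population Stability Index (PSI), a common drift metric; the metric choice and the synthetic score distributions are assumptions of this example, not specified by the posting.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two score distributions.

    Bin edges come from the 'expected' (training-time) distribution.
    """
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf        # catch out-of-range values
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)           # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Synthetic model scores: a 0.5-sigma mean shift simulates drift.
rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 5000)
drifted = rng.normal(0.5, 1.0, 5000)

print(round(psi(train_scores, train_scores), 4))  # identical data: 0.0
print(round(psi(train_scores, drifted), 3))
```

A common rule of thumb treats PSI above roughly 0.25 as a significant shift worth investigating.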
Posted 1 month ago
2.0 - 7.0 years
8 - 18 Lacs
Pune, Sonipat
Work from Office
About the Role Overview: Newton School of Technology is on a mission to transform technology education and bridge the employability gap. As India's first impact university, we are committed to revolutionizing learning, empowering students, and shaping the future of the tech industry. Backed by renowned professionals and industry leaders, we aim to solve the employability challenge and create a lasting impact on society. We are currently looking for a Data Engineer + Associate Instructor Data Mining to join our Computer Science Department. This is a full-time academic role focused on data mining, analytics, and teaching/mentoring students in core data science and engineering topics. Key Responsibilities: Develop and deliver comprehensive and engaging lectures for the undergraduate "Data Mining", “Big Data”, and “Data Analytics” courses, covering the full syllabus from foundational concepts to advanced techniques. Instruct students on the complete data lifecycle, including data preprocessing, cleaning, transformation, and feature engineering. Teach the theory, implementation, and evaluation of a wide range of algorithms for Classification, Association Rule Mining, Clustering, and Anomaly Detection. Design and facilitate practical lab sessions and assignments that provide students with hands-on experience using modern data tools and software. Develop and grade assessments, including assignments, projects, and examinations, that effectively measure the Course Learning Objectives (CLOs). Mentor and guide students on projects, encouraging them to work with real-world or benchmark datasets (e.g., from Kaggle). Stay current with the latest advancements, research, and industry trends in data engineering and machine learning to ensure the curriculum remains relevant and cutting-edge. Contribute to the academic and research environment of the department and the university. Required Qualifications: A Ph.D. 
(or a Master's degree with significant, relevant industry experience) in Computer Science, Data Science, Artificial Intelligence, or a closely related field. Demonstrable expertise in the core concepts of data engineering and machine learning as outlined in the syllabus. Strong practical proficiency in Python and its data science ecosystem, specifically Scikit-learn, Pandas, NumPy, and visualization libraries (e.g., Matplotlib, Seaborn). Proven experience in teaching, preferably at the undergraduate level, with an ability to make complex topics accessible and engaging. Excellent communication and interpersonal skills. Preferred Qualifications: A strong record of academic publications in reputable data mining, machine learning, or AI conferences/journals. Prior industry experience as a Data Scientist, Big Data Engineer, Machine Learning Engineer, or in a similar role. Experience with big data technologies (e.g., Spark, Hadoop) and/or deep learning frameworks (e.g., TensorFlow, PyTorch). Experience in mentoring student teams for data science competitions or hackathons. Perks & Benefits: Competitive salary packages aligned with industry standards. Access to state-of-the-art labs and classroom facilities. To know more about us, feel free to explore our website: Newton School of Technology We look forward to the possibility of having you join our academic team and help shape the future of tech education!
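Association rule mining, one of the syllabus topics in this posting, can be demonstrated from first principles in a few lines; the market-basket transactions below are a made-up classroom example.

```python
from itertools import combinations

# Toy market-basket transactions (invented for illustration).
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk", "butter"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Frequent pairs at a minimum support of 0.5.
items = sorted({i for t in transactions for i in t})
frequent_pairs = [set(p) for p in combinations(items, 2) if support(set(p)) >= 0.5]

# Confidence of the rule {bread} -> {milk}: P(milk | bread).
conf = support({"bread", "milk"}) / support({"bread"})
print(support({"bread", "milk"}), round(conf, 3))
```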
Posted 1 month ago
3.0 - 4.0 years
4 - 9 Lacs
Hyderābād
On-site
Job Title: Senior Python Developer – Trading Systems & Market Data Experience: 3–4 Years Location: Hyderabad, Telangana (On-site) Employment Type: Full-Time About the Role: We are seeking a Senior Python Developer with 3–4 years of experience and a strong understanding of stock market dynamics, technical indicators, and trading systems. You’ll take ownership of backtesting frameworks, strategy optimization, and developing high-performance, production-ready trading modules. The ideal candidate is someone who can think critically about trading logic, handle edge cases with precision, and write clean, scalable, and testable code. You should be comfortable working in a fast-paced, data-intensive environment where accuracy and speed are key. Key Responsibilities: Design and maintain robust backtesting and live trading frameworks. Build modules for strategy development, simulation, and optimization. Integrate with real-time and historical market data sources (e.g., APIs, databases). Use libraries like Pandas, NumPy, TA-Lib, Matplotlib, SciPy, etc., for data processing and signal generation. Apply statistical methods to validate strategies (mean, regression, correlation, standard deviation, etc.). Optimize code for low-latency execution and memory efficiency. Collaborate with traders and quants to implement and iterate on ideas. Use Git and manage codebases with best practices (unit testing, modular design, etc.). Required Skills & Qualifications: 3–4 years of Python development experience, especially in data-intensive environments. Strong understanding of algorithms, data structures, and performance optimization. Hands-on with technical indicators, trading strategy design, and data visualization. Proficient with Pandas, NumPy, Matplotlib, SciPy, TA-Lib, etc. Strong SQL skills and experience working with structured and time-series data. Exposure to REST APIs, data ingestion pipelines, and message queues (e.g., Kafka, RabbitMQ) is a plus. 
Experience in version control systems (Git) and collaborative development workflows. Preferred Experience: Hands-on experience with trading platforms or algorithmic trading systems. Familiarity with order management systems (OMS), execution logic, or market microstructure. Prior work with cloud infrastructure (AWS, GCP) or Docker/Kubernetes. Knowledge of machine learning or reinforcement learning in financial contexts is a bonus. What You’ll Get: Opportunity to work on real-world trading systems with measurable impact. A collaborative and fast-paced environment. A role where your ideas directly translate to production and trading performance. Job Type: Full-time Pay: ₹400,000.00 - ₹900,000.00 per year Location Type: In-person Schedule: Day shift Work Location: In person
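A minimal sketch of the backtesting work described above: a moving-average crossover on synthetic prices. The windows, the random-walk data, and the strategy itself are illustrative assumptions; a production framework would also model costs, slippage, and execution.

```python
import numpy as np
import pandas as pd

# Synthetic price series standing in for real market data.
rng = np.random.default_rng(42)
prices = pd.Series(100 + np.cumsum(rng.normal(0.1, 1.0, 250)))

fast = prices.rolling(10).mean()
slow = prices.rolling(30).mean()

# Long when the fast MA is above the slow MA; shift(1) means the
# signal computed at close t is only traded at t+1 (no look-ahead).
position = (fast > slow).astype(int).shift(1).fillna(0)
returns = prices.pct_change().fillna(0)
equity = (1 + position * returns).cumprod()

print(round(float(equity.iloc[-1]), 3))
```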
Posted 1 month ago
3.0 years
5 - 6 Lacs
Gurgaon
On-site
Gurgaon | 3+ Years | Full Time
We are looking for a technically adept and instructionally strong AI Developer with core expertise in Python, Large Language Models (LLMs), prompt engineering, and vector search frameworks such as FAISS, LlamaIndex, or RAG-based architectures. The ideal candidate combines solid foundations in data science, statistics, and machine learning development with a hands-on understanding of ML DevOps, model selection, and deployment pipelines. 3–4 years of experience in applied machine learning or AI development, including at least 1–2 years working with LLMs, prompt engineering, or vector search systems.
Core Skills Required: Python: Advanced-level expertise in scripting, data manipulation, and model development. LLMs (Large Language Models): Practical experience with GPT, LLaMA, Mistral, or open-source transformer models. Prompt Engineering: Ability to design, optimize, and instruct on prompt patterns for various use cases. Vector Search & RAG: Understanding of feature vectors, nearest neighbor search, and retrieval-augmented generation (RAG) using tools like FAISS, Pinecone, Chroma, or Weaviate. LlamaIndex: Experience building AI applications using LlamaIndex, including indexing documents and building query pipelines. Rack Knowledge: Familiarity with RACK architecture, model placement, and scaling on distributed hardware. ML / ML DevOps: Knowledge of the full ML lifecycle, including feature engineering, model selection, training, and deployment. Data Science & Statistics: Solid grounding in statistical modeling, hypothesis testing, probability, and computing concepts.
Responsibilities: Design and develop AI pipelines using LLMs and traditional ML models. Build, fine-tune, and evaluate large language models for various NLP tasks. Design prompts and RAG-based systems to optimize output relevance and factual grounding. Implement and deploy vector search systems integrated with document knowledge bases. Select appropriate models based on data and business requirements. Perform data wrangling, feature extraction, and model training. Develop training material, internal documentation, and course content (especially around Python and AI development using LlamaIndex). Work with DevOps to deploy AI solutions efficiently using containers, CI/CD, and cloud infrastructure. Collaborate with data scientists and stakeholders to build scalable, interpretable solutions. Maintain awareness of emerging tools and practices in AI and ML ecosystems.
Preferred Tools & Stack: Languages: Python, SQL. ML Frameworks: Scikit-learn, PyTorch, TensorFlow, Hugging Face Transformers. Vector DBs: FAISS, Pinecone, Chroma, Weaviate. RAG Tools: LlamaIndex, LangChain. ML Ops: MLflow, DVC, Docker, Kubernetes, GitHub Actions. Data Tools: Pandas, NumPy, Jupyter. Visualization: Matplotlib, Seaborn, Streamlit. Cloud: AWS/GCP/Azure (S3, Lambda, Vertex AI, SageMaker).
Ideal Candidate: Background in Data Science, Statistics, or Computing. Passionate about emerging AI tech, LLMs, and real-world applications. Demonstrates both hands-on coding skills and teaching/instructional abilities. Capable of building reusable, explainable AI solutions.
Location: Gurgaon, Sector 49
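The retrieval half of a RAG pipeline reduces to nearest-neighbor search over embeddings. The toy sketch below uses bag-of-words counts as stand-in "embeddings" and brute-force cosine similarity in NumPy; a production system would use learned embeddings and a vector store such as FAISS instead, and the documents here are invented.

```python
import numpy as np

# Made-up document corpus.
docs = [
    "pandas is a python library for data analysis",
    "faiss performs fast nearest neighbor search",
    "transformers power large language models",
]
vocab = sorted({w for d in docs for w in d.split()})

def embed(text):
    """Toy embedding: word counts over the corpus vocabulary.
    Query words outside the vocabulary are simply ignored."""
    words = text.split()
    return np.array([words.count(w) for w in vocab], dtype=float)

doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query, k=1):
    """Return the k most cosine-similar documents to the query."""
    q = embed(query)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1)
                           * (np.linalg.norm(q) + 1e-12))
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

print(retrieve("nearest neighbor search with faiss")[0])
```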
Posted 1 month ago
0 years
10 - 30 Lacs
Sonipat
Remote
Newton School of Technology is on a mission to transform technology education and bridge the employability gap. As India’s first impact university, we are committed to revolutionizing learning, empowering students, and shaping the future of the tech industry. Backed by renowned professionals and industry leaders, we aim to solve the employability challenge and create a lasting impact on society. We are currently looking for a Data Mining Engineer to join our Computer Science Department. This is a full-time academic role focused on data mining, analytics, and teaching/mentoring students in core data science and engineering topics. Key Responsibilities:
● Develop and deliver comprehensive and engaging lectures for the undergraduate "Data Mining", “Big Data”, and “Data Analytics” courses, covering the full syllabus from foundational concepts to advanced techniques.
● Instruct students on the complete data lifecycle, including data preprocessing, cleaning, transformation, and feature engineering.
● Teach the theory, implementation, and evaluation of a wide range of algorithms for Classification, Association Rule Mining, Clustering, and Anomaly Detection.
● Design and facilitate practical lab sessions and assignments that provide students with hands-on experience using modern data tools and software.
● Develop and grade assessments, including assignments, projects, and examinations, that effectively measure the Course Learning Objectives (CLOs).
● Mentor and guide students on projects, encouraging them to work with real-world or benchmark datasets (e.g., from Kaggle).
● Stay current with the latest advancements, research, and industry trends in data engineering and machine learning to ensure the curriculum remains relevant and cutting-edge.
● Contribute to the academic and research environment of the department and the university. Required Qualifications:
● A Ph.D. 
(or a Master's degree with significant, relevant industry experience) in Computer Science, Data Science, Artificial Intelligence, or a closely related field.
● Demonstrable expertise in the core concepts of data engineering and machine learning as outlined in the syllabus.
● Strong practical proficiency in Python and its data science ecosystem, specifically Scikit-learn, Pandas, NumPy, and visualization libraries (e.g., Matplotlib, Seaborn).
● Proven experience in teaching, preferably at the undergraduate level, with an ability to make complex topics accessible and engaging.
● Excellent communication and interpersonal skills. Preferred Qualifications:
● A strong record of academic publications in reputable data mining, machine learning, or AI conferences/journals.
● Prior industry experience as a Data Scientist, Big Data Engineer, Machine Learning Engineer, or in a similar role.
● Experience with big data technologies (e.g., Spark, Hadoop) and/or deep learning frameworks (e.g., TensorFlow, PyTorch).
● Experience in mentoring student teams for data science competitions or hackathons. Perks & Benefits:
● Competitive salary packages aligned with industry standards.
● Access to state-of-the-art labs and classroom facilities.
To know more about us, feel free to explore our website: Newton School of Technology. We look forward to the possibility of having you join our academic team and help shape the future of tech education! Job Type: Full-time Pay: ₹1,000,000.00 - ₹3,000,000.00 per year Benefits: Food provided, Health insurance, Leave encashment, Paid sick time, Paid time off, Provident Fund, Work from home Schedule: Day shift, Monday to Friday Supplemental Pay: Performance bonus, Quarterly bonus, Yearly bonus Application Question(s): Are you interested in a full-time onsite Instructor role? Are you ready to relocate to Sonipat - NCR Delhi? Are you ready to relocate to Pune? Work Location: In person Expected Start Date: 15/07/2025
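Anomaly detection, one of the syllabus topics above, has a classic classroom baseline in Tukey's 1.5×IQR rule; the data points below are invented for the demonstration.

```python
import numpy as np

def iqr_outliers(x: np.ndarray) -> np.ndarray:
    """Flag points outside the 1.5*IQR fences (Tukey's rule)."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return (x < lo) | (x > hi)

# Made-up readings with one obvious anomaly.
data = np.array([10.0, 11.0, 10.5, 9.8, 10.2, 25.0])
print(data[iqr_outliers(data)])
```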
Posted 1 month ago
1.0 - 4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Title: Bioinformatician
Date: 20 Jun 2025
Job Location: Bangalore
About Syngene: Syngene ( www.syngeneintl.com ) is an innovation-led contract research, development and manufacturing organization offering integrated scientific services from early discovery to commercial supply. At Syngene, safety is at the heart of everything we do, personally and professionally. Syngene places safety on par with business performance, with shared responsibility and accountability. This includes:
Following safety guidelines, procedures, and SOPs in letter and spirit
Overall adherence to safe practices and procedures by oneself and the teams aligned
Contributing to the development of procedures, practices, and systems that ensure safe operations and compliance with the company’s integrity and quality standards
Driving a corporate culture that promotes an environment, health, and safety (EHS) mindset and operational discipline in the workplace at all times
Ensuring the safety of self, teams, and lab/plant by adhering to safety protocols and following environment, health, and safety (EHS) requirements at all times in the workplace
Ensuring all assigned mandatory trainings related to data integrity, health, and safety measures are completed on time by all members of the team, including self
Compliance with Syngene’s quality standards at all times
Holding self and their teams accountable for the achievement of safety goals
Governing and reviewing safety metrics from time to time
We are seeking a highly skilled and experienced computational biologist to join our team. The ideal candidate will have a proven track record in multi-omics data analysis. They will be responsible for integrative analyses and contributing to the development of novel computational approaches to uncover biological insights.
Experience: 1-4 years
Core Purpose of the Role: To support data-driven biological research by performing computational analysis of omics data and generating translational insights through bioinformatics tools and pipelines.
Position Responsibilities:
Conduct comprehensive analyses of multi-omics datasets, including genomics, transcriptomics, proteomics, metabolomics, and epigenomics.
Develop computational workflows to integrate various -omics data to generate inferences and hypotheses for testing.
Conduct differential expression and functional enrichment analyses.
Implement and execute data processing workflows and automate the pipelines with best practices for version control, modularization, and documentation.
Apply advanced multivariate data analysis techniques, including regression, clustering, and dimensionality reduction, to uncover patterns and relationships in large datasets.
Collaborate with researchers, scientists, and other team members to translate computational findings into actionable biological insights.
Educational Qualifications: Master’s degree in bioinformatics.
Mandatory Technical Skills:
Programming: Proficiency in Python for data analysis, visualization, and pipeline development.
Multi-omics analysis: Proven experience in analyzing and integrating multi-omics datasets.
Statistics: Knowledge of probability distributions, correlation analysis, and hypothesis testing.
Data visualization: Strong understanding of data visualization techniques and tools (e.g., ggplot2, matplotlib, seaborn).
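The dimensionality-reduction technique listed among the responsibilities above can be sketched with a minimal PCA via SVD in NumPy; the data matrix here is random noise standing in for a real samples-by-features omics table:

```python
import numpy as np

# Tiny synthetic "omics" matrix: 6 samples x 4 features (hypothetical values).
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))

# PCA via SVD: centre the data, decompose, project onto the top-k components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
scores = Xc @ Vt[:k].T             # sample coordinates in PC space
explained = (S**2) / (S**2).sum()  # variance ratio per component

print(scores.shape)  # (6, 2)
```

Plotting `scores` (e.g., with matplotlib or seaborn, as the skills list suggests) is the usual way to inspect sample clustering after reduction.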
Preferred: Machine learning – familiarity with AI/ML concepts.
Behavioral Skills: Excellent communication skills, objective thinking, problem solving, proactivity.
Syngene Values: All employees will consistently demonstrate alignment with our core values: Excellence, Integrity, Professionalism.
Equal Opportunity Employer: It is the policy of Syngene to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by applicable legislation or local law. In addition, Syngene will provide reasonable accommodations for qualified individuals with disabilities.
Posted 1 month ago
2.0 - 3.0 years
0 Lacs
Greater Chennai Area
On-site
Roles & Responsibilities
To impart training and monitor the student life cycle to ensure standard outcomes.
Conduct live in-person/virtual classes to train learners on Adv. Excel, Power BI, Python and advanced Python libraries such as NumPy, Matplotlib, Pandas, Seaborn, SciPy, SQL/MySQL, data analysis, and basic statistics.
Facilitate and support learners' progress/journey to deliver a personalized blended learning experience and achieve the desired skill outcomes.
Evaluate and grade learners' project reports, project presentations, and other documents.
Mentor learners during support, project, and assessment sessions.
Develop, validate, and implement learning content, curriculum, and training programs whenever applicable.
Liaise with and support the respective teams on schedule planning, learner progress, academic evaluation, learning management, etc.
Desired profile:
2-3 years of technical training experience in a corporate or ed-tech institute (not college lecturer or school teacher profiles).
Must be proficient in Adv. Excel, Power BI, Python and advanced Python libraries such as NumPy, Matplotlib, Pandas, SciPy, Seaborn, SQL/MySQL, data analysis, and basic statistics.
Experience in training in data analysis.
Should have worked as a Data Analyst.
Must have good analysis and problem-solving skills.
Must have good communication and delivery skills.
Good knowledge of databases (SQL, MySQL).
Additional advantage: Knowledge of Flask, Core Java.
Posted 1 month ago
0 years
0 Lacs
India
Remote
Data Science Intern
Company: INLIGHN TECH
Location: Remote (100% Virtual)
Duration: 3 Months
Stipend for Top Interns: ₹15,000
Certificate Provided | Letter of Recommendation | Full-Time Offer Based on Performance
About the Company: INLIGHN TECH empowers students and fresh graduates with real-world experience through hands-on, project-driven internships. The Data Science Internship is designed to equip you with the skills required to extract insights, build predictive models, and solve complex problems using data.
Role Overview: As a Data Science Intern, you will work on real-world datasets to develop machine learning models, perform data wrangling, and generate actionable insights. This internship will help you strengthen your technical foundation in data science while working on projects that have a tangible business impact.
Key Responsibilities:
Collect, clean, and preprocess data from various sources
Apply statistical methods and machine learning techniques to extract insights
Build and evaluate predictive models for classification, regression, or clustering tasks
Visualize data using libraries like Matplotlib, Seaborn, or tools like Power BI
Document findings and present results to stakeholders in a clear and concise manner
Collaborate with team members on data-driven projects and innovations
Qualifications:
Pursuing or recently completed a degree in Data Science, Computer Science, Mathematics, or a related field
Proficiency in Python and data science libraries (NumPy, Pandas, Scikit-learn, etc.)
Understanding of statistical analysis and machine learning algorithms
Familiarity with SQL and data visualization tools or libraries
Strong analytical, problem-solving, and critical thinking skills
Eagerness to learn and apply data science techniques to solve real-world problems
Internship Benefits:
Hands-on experience with real datasets and end-to-end data science projects
Certificate of Internship upon successful completion
Letter of Recommendation for top performers
Build a strong portfolio of data science projects and models
Posted 1 month ago
0.0 - 4.0 years
4 - 9 Lacs
Hyderabad, Telangana
On-site
Job Title: Senior Python Developer – Trading Systems & Market Data
Experience: 3–4 Years
Location: Hyderabad, Telangana (On-site)
Employment Type: Full-Time
About the Role: We are seeking a Senior Python Developer with 3–4 years of experience and a strong understanding of stock market dynamics, technical indicators, and trading systems. You’ll take ownership of backtesting frameworks, strategy optimization, and developing high-performance, production-ready trading modules. The ideal candidate is someone who can think critically about trading logic, handle edge cases with precision, and write clean, scalable, and testable code. You should be comfortable working in a fast-paced, data-intensive environment where accuracy and speed are key.
Key Responsibilities:
Design and maintain robust backtesting and live trading frameworks.
Build modules for strategy development, simulation, and optimization.
Integrate with real-time and historical market data sources (e.g., APIs, databases).
Use libraries like Pandas, NumPy, TA-Lib, Matplotlib, and SciPy for data processing and signal generation.
Apply statistical methods to validate strategies (mean, regression, correlation, standard deviation, etc.).
Optimize code for low-latency execution and memory efficiency.
Collaborate with traders and quants to implement and iterate on ideas.
Use Git and manage codebases with best practices (unit testing, modular design, etc.).
Required Skills & Qualifications:
3–4 years of Python development experience, especially in data-intensive environments.
Strong understanding of algorithms, data structures, and performance optimization.
Hands-on with technical indicators, trading strategy design, and data visualization.
Proficient with Pandas, NumPy, Matplotlib, SciPy, TA-Lib, etc.
Strong SQL skills and experience working with structured and time-series data.
Exposure to REST APIs, data ingestion pipelines, and message queues (e.g., Kafka, RabbitMQ) is a plus.
Experience with version control systems (Git) and collaborative development workflows.
Preferred Experience:
Hands-on experience with trading platforms or algorithmic trading systems.
Familiarity with order management systems (OMS), execution logic, or market microstructure.
Prior work with cloud infrastructure (AWS, GCP) or Docker/Kubernetes.
Knowledge of machine learning or reinforcement learning in financial contexts is a bonus.
What You’ll Get:
Opportunity to work on real-world trading systems with measurable impact.
A collaborative and fast-paced environment.
A role where your ideas directly translate to production and trading performance.
Job Type: Full-time
Pay: ₹400,000.00 - ₹900,000.00 per year
Location Type: In-person
Schedule: Day shift
Work Location: In person
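The signal-generation work described above often starts from something as simple as a moving-average crossover. A stdlib-only toy sketch (the prices are invented; a production system would use Pandas or TA-Lib on real market-data feeds):

```python
from statistics import mean

# Hypothetical closing prices; a real system would pull these from a market-data API.
closes = [100, 101, 103, 102, 105, 107, 106, 109, 111, 110]

def sma(series, window):
    """Simple moving average; None until the window has enough data points."""
    return [None if i + 1 < window else mean(series[i + 1 - window:i + 1])
            for i in range(len(series))]

fast, slow = sma(closes, 3), sma(closes, 5)

# Signal: +1 when the fast average is above the slow one, -1 below, 0 otherwise.
signals = [0 if f is None or s is None else (1 if f > s else -1 if f < s else 0)
           for f, s in zip(fast, slow)]
print(signals)  # [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
```

Backtesting would then apply these signals to subsequent returns and summarize performance with the statistics the posting mentions (mean, standard deviation, correlation).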
Posted 1 month ago
4.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
We are looking for a technically adept and instructionally strong AI Developer with core expertise in Python, Large Language Models (LLMs), prompt engineering, and vector search frameworks such as FAISS, LlamaIndex, or RAG-based architectures. The ideal candidate combines solid foundations in data science, statistics, and machine learning development with a hands-on understanding of ML DevOps, model selection, and deployment pipelines. 3–4 years of experience in applied machine learning or AI development, including at least 1–2 years working with LLMs, prompt engineering, or vector search systems.
Core Skills Required:
Python: Advanced-level expertise in scripting, data manipulation, and model development
LLMs (Large Language Models): Practical experience with GPT, LLaMA, Mistral, or open-source transformer models
Prompt Engineering: Ability to design, optimize, and instruct on prompt patterns for various use cases
Vector Search & RAG: Understanding of feature vectors, nearest neighbor search, and retrieval-augmented generation (RAG) using tools like FAISS, Pinecone, Chroma, or Weaviate
LlamaIndex: Experience building AI applications using LlamaIndex, including indexing documents and building query pipelines
Rack Knowledge: Familiarity with rack architecture, model placement, and scaling on distributed hardware
ML / ML DevOps: Knowledge of the full ML lifecycle, including feature engineering, model selection, training, and deployment
Data Science & Statistics: Solid grounding in statistical modeling, hypothesis testing, probability, and computing concepts
Responsibilities:
Design and develop AI pipelines using LLMs and traditional ML models
Build, fine-tune, and evaluate large language models for various NLP tasks
Design prompts and RAG-based systems to optimize output relevance and factual grounding
Implement and deploy vector search systems integrated with document knowledge bases
Select appropriate models based on data and business requirements
Perform data wrangling,
feature extraction, and model training
Develop training material, internal documentation, and course content (especially around Python and AI development using LlamaIndex)
Work with DevOps to deploy AI solutions efficiently using containers, CI/CD, and cloud infrastructure
Collaborate with data scientists and stakeholders to build scalable, interpretable solutions
Maintain awareness of emerging tools and practices in AI and ML ecosystems
Preferred Tools & Stack:
Languages: Python, SQL
ML Frameworks: Scikit-learn, PyTorch, TensorFlow, Hugging Face Transformers
Vector DBs: FAISS, Pinecone, Chroma, Weaviate
RAG Tools: LlamaIndex, LangChain
ML Ops: MLflow, DVC, Docker, Kubernetes, GitHub Actions
Data Tools: Pandas, NumPy, Jupyter
Visualization: Matplotlib, Seaborn, Streamlit
Cloud: AWS/GCP/Azure (S3, Lambda, Vertex AI, SageMaker)
Ideal Candidate:
Background in Data Science, Statistics, or Computing
Passionate about emerging AI tech, LLMs, and real-world applications
Demonstrates both hands-on coding skills and teaching/instructional abilities
Capable of building reusable, explainable AI solutions
Location: Gurgaon Sector 49
APPLY NOW
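The nearest-neighbor retrieval step at the heart of the RAG systems described above can be sketched without any vector database. The embeddings below are made-up 3-d vectors and the document names are hypothetical; a real pipeline would use an embedding model plus FAISS, Pinecone, or Chroma:

```python
from math import sqrt

# Toy document embeddings (hypothetical vectors; a real system would embed
# documents with a model and index them in a vector store such as FAISS).
docs = {
    "pricing_faq":   [0.9, 0.1, 0.0],
    "refund_policy": [0.2, 0.8, 0.1],
    "api_guide":     [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def retrieve(query_vec, k=1):
    """Return the ids of the k documents most similar to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # ['pricing_faq']
```

In a full RAG system, the retrieved documents would be stuffed into the LLM prompt to ground its answer; libraries like LlamaIndex and LangChain wrap exactly this embed-index-retrieve loop.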
Posted 1 month ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
Title: Lead Data Scientist/ML Engineer (5+ years & above)
Required Technical Skillset (GenAI):
Language: Python
Frameworks: Scikit-learn, TensorFlow, Keras, PyTorch
Libraries: NumPy, Pandas, Matplotlib, SciPy, boto3
Database: Relational (Postgres), NoSQL (MongoDB)
Cloud: AWS cloud platforms
Other Tools: Jenkins, Bitbucket, JIRA, Confluence
A machine learning engineer is responsible for designing, implementing, and maintaining machine learning systems and algorithms that allow computers to learn from and make predictions or decisions based on data. The role typically involves working with data scientists and software engineers to build and deploy machine learning models in a variety of applications such as natural language processing, computer vision, and recommendation systems. The key responsibilities of a machine learning engineer include:
Collecting and preprocessing large volumes of data, cleaning it up, and transforming it into a format that can be used by machine learning models.
Model building, which includes designing and building machine learning models and algorithms using techniques such as supervised and unsupervised learning, deep learning, and reinforcement learning.
Evaluating the performance of machine learning models using metrics such as accuracy, precision, recall, and F1 score.
Deploying machine learning models in production environments and integrating them into existing systems using CI/CD pipelines and AWS SageMaker.
Monitoring the performance of machine learning models and making adjustments as needed to improve their accuracy and efficiency.
Working closely with software engineers, product managers, and other stakeholders to ensure that machine learning models meet business requirements and deliver value to the organization.
Requirements And Skills
Mathematics and Statistics: A strong foundation in mathematics and statistics is essential.
They need to be familiar with linear algebra, calculus, probability, and statistics to understand the underlying principles of machine learning algorithms.
Programming Skills: Should be proficient in programming languages such as Python. The candidate should be able to write efficient, scalable, and maintainable code to develop machine learning models and algorithms.
Machine Learning Techniques: Should have a deep understanding of various machine learning techniques, such as supervised learning, unsupervised learning, and reinforcement learning, and should also be familiar with different types of models such as decision trees, random forests, neural networks, and deep learning.
Data Analysis and Visualization: Should be able to analyze and manipulate large data sets. The candidate should be familiar with data cleaning, transformation, and visualization techniques to identify patterns and insights in the data.
Deep Learning Frameworks: Should be familiar with deep learning frameworks such as TensorFlow, PyTorch, and Keras, and should be able to build and train deep neural networks for various applications.
Big Data Technologies: A machine learning engineer should have experience working with big data technologies such as Hadoop, Spark, and NoSQL databases. They should be familiar with distributed computing and parallel processing to handle large data sets.
Software Engineering: A machine learning engineer should have a good understanding of software engineering principles such as version control, testing, and debugging. They should be able to work with software development tools such as Git, Jenkins, and Docker.
Communication and Collaboration: A machine learning engineer should have good communication and collaboration skills to work effectively with cross-functional teams such as data scientists, software developers, and business stakeholders. (ref:hirist.tech)
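The evaluation metrics named above (accuracy, precision, recall, F1) all reduce to a few confusion-matrix counts. A stdlib-only sketch with invented labels (scikit-learn's `metrics` module provides the production versions):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall, and F1 computed from two label lists."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Hypothetical ground-truth and predicted labels for illustration.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(classification_metrics(y_true, y_pred))  # (0.75, 0.75, 0.75, 0.75)
```

Precision penalizes false positives, recall penalizes false negatives, and F1 is their harmonic mean, which is why the trade-off between them matters when classes are imbalanced.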
Posted 1 month ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Python Software Development Sr. Analyst
Job Description: In these roles, you will be responsible for:
Design, implement, and test generative AI models using Python and various frameworks such as Pandas, TensorFlow, PyTorch, and OpenAI.
Research and explore new techniques and applications of generative AI, such as text, image, audio, and video synthesis, style transfer, data augmentation, and anomaly detection.
Collaborate with other developers, researchers, and stakeholders to deliver high-quality and innovative solutions.
Document and communicate the results and challenges of generative AI projects.
Required Skills for this role include:
3+ years of experience in developing with Python frameworks such as Flask and DL/ML libraries.
At least 2 years of experience in developing generative AI models using Python and relevant frameworks.
Good knowledge of RPA.
Strong knowledge of machine learning, deep learning, and generative AI concepts and algorithms.
Proficient in Python and common libraries such as NumPy, Pandas, Matplotlib, and scikit-learn.
Familiar with version control, testing, debugging, and deployment tools.
Excellent communication and problem-solving skills.
Curious and eager to learn new technologies and domains.
Desired Skills:
Knowledge of Django and Web APIs.
Proficient exposure to MVC.
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Greetings from TCS! TCS IS HIRING FOR Azure Data Engineer
Technical Skill Set: PySpark, Azure Data Factory, Azure Databricks
Desired Experience Range: 4-8 years
Required Competencies:
1) Strong design and data solutioning skills
2) PySpark hands-on experience with complex transformations and large dataset handling
3) Good command of and hands-on experience in Python, including the following concepts, packages, and tools:
a. Object-oriented and functional programming
b. NumPy, Pandas, Matplotlib, requests, pytest
c. Jupyter, PyCharm and IDLE
d. Conda and virtual environments
4) Must have working experience with Hive, HBase, or similar
5) Azure skills:
a. Must have working experience in Azure Data Lake, Azure Data Factory, Azure Databricks, Azure SQL Databases
b. Azure DevOps
c. Azure AD integration, service principal, pass-thru login, etc.
d. Networking – vnet, private links, service connections, etc.
e. Integrations – Event Grid, Service Bus, etc.
6) Database skills:
a. Oracle, Postgres, SQL Server – experience with any one database
b. Oracle PL/SQL or T-SQL experience
Data modelling
Posted 1 month ago
6.0 - 9.5 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. Those in software engineering at PwC will focus on developing innovative software solutions to drive digital transformation and enhance business performance. In this field, you will use your knowledge to design, code, and test cutting-edge applications that revolutionise industries and deliver exceptional user experiences.
Position Title: Full Stack Lead Developer
Experience: 6-9.5 Years
Job Overview: We are seeking a highly skilled and versatile polyglot Full Stack Developer with expertise in modern front-end and back-end technologies, cloud-based solutions, AI/ML, and Gen AI. The ideal candidate will have a strong foundation in full-stack development, cloud platforms (preferably Azure), and hands-on experience in Gen AI, AI, and machine learning technologies.
Key Responsibilities:
Develop and maintain web applications using Angular/React.js, .NET, and Python.
Design, deploy, and optimize Azure native PaaS and SaaS services, including but not limited to Function Apps, Service Bus, Storage Accounts, SQL Databases, Key Vaults, ADF, Databricks, and REST APIs with OpenAPI specifications.
Implement security best practices for data in transit and at rest, and authentication best practices – SSO, OAuth 2.0, and Auth0.
Utilize Python for developing data processing and advanced AI/ML models using libraries like pandas, NumPy, scikit-learn, LangChain, LlamaIndex, and the Azure OpenAI SDK.
Leverage agentic frameworks like Crew AI, AutoGen, etc.; be well versed with RAG and agentic architecture.
Strong in design patterns – architectural, data, object-oriented.
Leverage Azure serverless components to build highly scalable and efficient solutions.
Create, integrate, and manage workflows using Power Platform, including Power Automate, Power Pages, and SharePoint.
Apply expertise in machine learning, deep learning, and Generative AI to solve complex problems.
Primary Skills:
Proficiency in React.js, .NET, and Python.
Strong knowledge of Azure Cloud Services, including serverless architectures and data security.
Experience with Python data analytics libraries: pandas, NumPy, scikit-learn, Matplotlib, Seaborn.
Experience with Python generative AI frameworks: LangChain, LlamaIndex, Crew AI, AutoGen.
Familiarity with REST API design, Swagger documentation, and authentication best practices.
Secondary Skills:
Experience with Power Platform tools such as Power Automate, Power Pages, and SharePoint integration.
Knowledge of Power BI for data visualization (preferred).
Preferred Knowledge Areas – Nice To Have:
In-depth understanding of machine learning and deep learning, including supervised and unsupervised algorithms.
Qualifications:
Bachelor's or master's degree in Computer Science, Engineering, or a related field.
6-12 years of hands-on experience in full-stack development and cloud-based solutions.
Strong problem-solving skills and ability to design scalable, maintainable solutions.
Excellent communication and collaboration skills.
Posted 1 month ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
When you join Verizon: You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.
What You Will Be Doing...
The Commercial Data & Analytics - Impact Analytics team is part of the Verizon Global Services (VGS) organization. The Impact Analytics team addresses high-impact, analytically driven projects focused within three core pillars: Customer Experience, Pricing & Monetization, and Network & Sustainability. In this role, you will analyze large data sets to draw insights and solutions to help drive actionable business decisions. You will also apply advanced analytical techniques and algorithms to help us solve some of Verizon’s most pressing challenges.
Use your analysis of large structured and unstructured datasets to draw meaningful and actionable insights
Envision and test for corner cases
Build analytical solutions and models by manipulating large data sets and integrating diverse data sources
Present the results and recommendations of statistical modeling and data analysis to management and other stakeholders
Lead the development and implementation of advanced reports and dashboard solutions to support business objectives.
Identify data sources and apply your knowledge of data structures, organization, transformation, and aggregation techniques to prepare data for in-depth analysis
Deeply understand business requirements and translate them into well-defined analytical problems, identifying the most appropriate statistical techniques to deliver impactful solutions
Assist in building data views from disparate data sources which power insights and business cases
Apply statistical modeling techniques / ML to data and perform root cause analysis and forecasting
Develop and implement rigorous frameworks for effective base management
Collaborate with cross-functional teams to discover the most appropriate data sources and fields that cater to the business needs
Design modular, reusable Python scripts to automate data processing
Clearly and effectively communicate complex statistical concepts and model results to both technical and non-technical audiences, translating your findings into actionable insights for stakeholders
What we’re looking for…
You have strong analytical skills, and are eager to work in a collaborative environment with global teams to drive ML applications in business problems, develop end-to-end analytical solutions, and communicate insights and findings to leadership. You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and are able to interact with various partners and cross-functional teams to implement data science driven business solutions.
You Will Need To Have:
Bachelor’s degree or six or more years of work experience
Six or more years of relevant work experience
Experience in managing a team of data scientists that supports a business function.
Proficiency in SQL, including writing queries for reporting, analysis, and extraction of data from big data systems (Google Cloud Platform, Teradata, Spark, Splunk, etc.)
Curiosity to dive deep into data inconsistencies and perform root cause analysis
Programming experience in Python (Pandas, NumPy, SciPy and Scikit-learn)
Experience with visualization tools: matplotlib, seaborn, Tableau, Grafana, etc.
A deep understanding of various machine learning algorithms and techniques, including supervised and unsupervised learning
Understanding of time series modeling and forecasting techniques
Even better if you have one or more of the following:
Experience with cloud computing platforms (e.g., AWS, Azure, GCP) and deploying machine learning models at scale using platforms like Domino Data Lab or Vertex AI
Experience in applying statistical ideas and methods to data sets to answer business problems
Ability to collaborate effectively across teams for data discovery and validation
Experience in deep learning, recommendation systems, conversational systems, information retrieval, and computer vision
Expertise in advanced statistical modeling techniques, such as Bayesian inference or causal inference
Excellent interpersonal, verbal, and written communication skills
Where you’ll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.
Scheduled Weekly Hours: 40
Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
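As a minimal sketch of the forecasting skill listed above, an ordinary least-squares trend extrapolation in pure Python; the series is invented, and real work would use proper time-series models (ARIMA, exponential smoothing, etc.):

```python
# Minimal least-squares trend forecast (stdlib only); the series is made up.
series = [120, 124, 129, 133, 138, 141]  # e.g. hypothetical monthly usage
xs = list(range(len(series)))

n = len(series)
x_mean = sum(xs) / n
y_mean = sum(series) / n
# Slope = Sxy / Sxx; intercept follows from the means.
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, series)) / \
        sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean

def forecast(step):
    """Extrapolate the fitted linear trend `step` periods past the data."""
    return intercept + slope * (n - 1 + step)

print(round(forecast(1), 2))  # 145.93
```

This is the same fit `scipy.stats.linregress` or `sklearn.linear_model.LinearRegression` would produce; the point is that a trend forecast is just an evaluated regression line.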
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
When you join Verizon: You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.
What You Will Be Doing...
The Commercial Data & Analytics - Impact Analytics team is part of the Verizon Global Services (VGS) organization. The Impact Analytics team addresses high-impact, analytically driven projects focused within three core pillars: Customer Experience, Pricing & Monetization, and Network & Sustainability. In this role, you will analyze large data sets to draw insights and solutions to help drive actionable business decisions. You will also apply advanced analytical techniques and algorithms to help us solve some of Verizon’s most pressing challenges.
Use your analysis of large structured and unstructured datasets to draw meaningful and actionable insights
Envision and test for corner cases
Build analytical solutions and models by manipulating large data sets and integrating diverse data sources
Present the results and recommendations of statistical modeling and data analysis to management and other stakeholders
Lead the development and implementation of advanced reports and dashboard solutions to support business objectives.
Identify data sources and apply your knowledge of data structures, organization, transformation, and aggregation techniques to prepare data for in-depth analysis
Deeply understand business requirements and translate them into well-defined analytical problems, identifying the most appropriate statistical techniques to deliver impactful solutions
Assist in building data views from disparate data sources which power insights and business cases
Apply statistical modeling techniques / ML to data and perform root cause analysis and forecasting
Develop and implement rigorous frameworks for effective base management
Collaborate with cross-functional teams to discover the most appropriate data sources and fields that cater to the business needs
Design modular, reusable Python scripts to automate data processing
Clearly and effectively communicate complex statistical concepts and model results to both technical and non-technical audiences, translating your findings into actionable insights for stakeholders
What we’re looking for…
You have strong analytical skills, and are eager to work in a collaborative environment with global teams to drive ML applications in business problems, develop end-to-end analytical solutions, and communicate insights and findings to leadership. You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and are able to interact with various partners and cross-functional teams to implement data science driven business solutions.
You Will Need To Have:
Bachelor’s degree or six or more years of work experience
Six or more years of relevant work experience
Experience in managing a team of data scientists that supports a business function.
- Proficiency in SQL, including writing queries for reporting, analysis, and extraction of data from big data systems (Google Cloud Platform, Teradata, Spark, Splunk, etc.)
- Curiosity to dive deep into data inconsistencies and perform root cause analysis
- Programming experience in Python (Pandas, NumPy, SciPy, and Scikit-learn)
- Experience with visualization tools (Matplotlib, Seaborn, Tableau, Grafana, etc.)
- A deep understanding of various machine learning algorithms and techniques, including supervised and unsupervised learning
- Understanding of time series modeling and forecasting techniques

Even better if you have one or more of the following:

- Experience with cloud computing platforms (e.g., AWS, Azure, GCP) and deploying machine learning models at scale using platforms such as Domino Data Lab or Vertex AI
- Experience applying statistical ideas and methods to data sets to answer business problems
- Ability to collaborate effectively across teams for data discovery and validation
- Experience in deep learning, recommendation systems, conversational systems, information retrieval, or computer vision
- Expertise in advanced statistical modeling techniques, such as Bayesian inference or causal inference
- Excellent interpersonal, verbal, and written communication skills

Where you’ll be working

In this hybrid role, you’ll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours

40

Equal Employment Opportunity

Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability, or any other legally protected characteristics.