Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a Senior Data Quality Engineer to join our innovative team, where you will drive excellence in database testing, performance optimization, and test automation frameworks. You will leverage advanced Python scripting and database expertise to ensure data integrity and optimize SQL transactions for scalability while working within cutting-edge AI/ML-driven environments.

Responsibilities
Develop robust Python-based test frameworks for SQL validation, ETL verification, and stored procedure unit testing
Automate data-driven testing with tools like pytest, Hypothesis, pandas, and tSQLt
Implement AI/ML models for detecting anomalous behaviors in SQL transactions and for test case generation to cover edge scenarios
Train machine learning models to predict slow queries and optimize database performance through indexing strategies
Validate stored procedures, triggers, views, and business rules for consistency and accuracy
Apply performance benchmarking with JMeter, SQLAlchemy, and AI-driven anomaly detection methods
Conduct data drift detection to analyze and compare staging vs. production environments
Automate database schema validations using tools such as Liquibase or Flyway in CI/CD workflows
Integrate Python test scripts into CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps)
Design mock database environments to support automated regression testing for complex architectures
Collaborate with cross-functional teams to develop scalable and efficient data quality solutions

Requirements
5+ years of working experience in data quality engineering or similar roles
Proficiency in SQL Server, T-SQL, stored procedures, indexing, and execution plans, with a strong foundation in query performance tuning and optimization strategies
Background in ETL validation, data reconciliation, and business logic testing for complex datasets
Skills in Python programming for test automation, data validation, and anomaly detection, with hands-on expertise in pytest, pandas, NumPy, and SQLAlchemy
Familiarity with frameworks like Great Expectations for developing comprehensive validation processes
Competency in integrating automated test scripts into CI/CD environments such as Jenkins, GitHub Actions, and Azure DevOps
Experience with tools like Liquibase or Flyway for schema validation and database migration testing
Understanding of implementing AI/ML-driven methods for database testing and optimization

Nice to have
Knowledge of JMeter or similar performance testing tools for SQL benchmarking
Background in AI-based techniques for detecting data drift or training predictive models
Expertise in mock database design for highly scalable architectures
Familiarity with handling dynamic edge case testing using AI-based test case generation
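Several of the items above (data reconciliation, staging-vs-production comparison) reduce to comparing two datasets on a key column. A minimal pandas sketch of such a check, with an illustrative function name and toy column, not anything prescribed by the posting:

```python
import pandas as pd

def detect_row_drift(staging: pd.DataFrame, production: pd.DataFrame, key: str):
    """Return rows present only in staging and rows present only in production,
    matched on `key` -- a simple reconciliation / drift screen."""
    merged = staging.merge(production, on=key, how="outer", indicator=True)
    only_staging = merged[merged["_merge"] == "left_only"]
    only_production = merged[merged["_merge"] == "right_only"]
    return only_staging, only_production
```

In practice the same pattern extends to checksums or aggregates per key rather than bare membership.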
Posted 1 week ago
5.0 - 10.0 years
20 - 35 Lacs
Hyderabad
Work from Office
Job Description: We are seeking a talented and experienced Data Scientist to join our dynamic team. The ideal candidate will have a strong background in data analysis, machine learning, statistical modeling, and artificial intelligence. Experience with Natural Language Processing (NLP) is desirable, and experience delivering products that incorporate AI/ML, along with familiarity with cloud services such as AWS, is highly desirable.

Required Skills/Qualifications:
- 3-12 years of experience in AI/ML related work
- Extensive experience in Python
- Familiarity with statistical models such as Linear/Logistic Regression, Bayesian models, classification/clustering models, and time series analysis
- Experience with deep learning models such as CNNs, RNNs, LSTMs, and Transformers
- Experience with machine learning frameworks such as TensorFlow, PyTorch, Scikit-learn, and Keras
- Experience with GenAI, LLMs, and RAG architecture would be a plus
- Familiarity with cloud services such as AWS and Azure
- Familiarity with version control systems (e.g., Git), JIRA, and Confluence
- Familiarity with MLOps concepts and AI/ML pipeline tooling such as Kedro
- Knowledge of CI/CD pipelines and DevOps practices
- Experience delivering customer-facing AI solutions delivered as SaaS would be a plus
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience
- Strong problem-solving skills and attention to detail
- Excellent verbal and written communication and teamwork skills
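As a flavour of the statistical-modeling skills listed above, a minimal scikit-learn workflow fitting and scoring a logistic regression; the synthetic data stands in for a real dataset and is purely illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data (500 rows, 10 features).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a logistic regression and evaluate on the held-out split.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))
```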
Posted 1 week ago
8.0 - 14.0 years
15 - 20 Lacs
Bengaluru
Work from Office
- Technical Lead with a total IT experience of 8-12 years.
- 2+ years of experience as a Technical Lead.
- Strong programming knowledge in one of the following technology areas:
1. Python: Familiarity with frameworks like FastAPI or Flask, along with data libraries like NumPy and Pandas.
2. .NET: Knowledge of ASP.NET and Web API development.
3. Java: Proficiency with Spring or Spring Boot.
- Experience with any one of the following cloud platforms and services:
1. Azure: Azure App Service or Azure Functions, Azure Storage
2. AWS: Elastic Beanstalk, Lambda, S3
- Experience with at least one of the following databases: Oracle, Azure SQL, SQL Server, Cosmos DB, MySQL, PostgreSQL, or MongoDB.
- Minimum of 3 months' experience in developing GenAI solutions using any LLMs and deploying them on cloud platforms.
- Lead, mentor, and manage a team of developers to deliver complex IT solutions.
Posted 1 week ago
4.0 - 9.0 years
14 - 19 Lacs
Bengaluru
Work from Office
Design and develop Generative AI models for specific business applications. Fine-tune and optimize large language models for performance and accuracy. Build custom pipelines to integrate generative AI into business workflows. Research and implement state-of-the-art techniques in Generative AI. Ensure ethical AI practices and compliance with relevant guidelines.

Key Skills: Strong understanding of NLP and Generative AI models like GPT, BERT, or Stable Diffusion. Proficiency in Python and experience with AI frameworks like Hugging Face or OpenAI APIs. Familiarity with prompt engineering and fine-tuning techniques. Excellent collaboration and documentation skills.
Posted 1 week ago
3.0 - 7.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Happiest Minds Technologies Pvt. Ltd. is looking for a Databricks Professional to join our dynamic team and embark on a rewarding career journey. Assessing and analyzing client requirements related to data processing, analytics, and machine learning. Designing and developing data pipelines, workflows, and applications using the Databricks platform. Integrating and connecting Databricks with other data sources, databases, and tools. Developing and implementing machine learning models using libraries such as Scikit-learn, TensorFlow, and PyTorch. Skills: Databricks, Spark, Python, Core ML, pipeline creation, Airflow, Snowflake
Posted 1 week ago
3.0 - 6.0 years
10 - 15 Lacs
Gurugram, Bengaluru
Work from Office
3+ years of experience in data science roles, working with tabular data in large-scale projects. Experience in feature engineering and working with methods such as XGBoost, LightGBM, factorization machines, and similar algorithms. Experience in adtech or fintech industries is a plus. Familiarity with clickstream data, predictive modeling for user engagement, or bidding optimization is highly advantageous. MS or PhD in mathematics, computer science, physics, statistics, electrical engineering, or a related field. Proficiency in Python (3.9+), with experience in scientific computing and machine learning tools (e.g., NumPy, Pandas, SciPy, scikit-learn, matplotlib, etc.). Familiarity with deep learning frameworks (such as TensorFlow or PyTorch) is a plus. Strong expertise in applied statistical methods, A/B testing frameworks, advanced experiment design, and interpreting complex experimental results. Experience querying and processing data using SQL and working with distributed data storage solutions (e.g., AWS Redshift, Snowflake, BigQuery, Athena, Presto, MinIO, etc.). Experience in budget allocation optimization, lookalike modeling, LTV prediction, or churn analysis is a plus. Ability to manage multiple projects, prioritize tasks effectively, and maintain a structured approach to complex problem-solving. Excellent communication and collaboration skills to work effectively with both technical and business teams.
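The A/B-testing expertise asked for above is often exercised through a two-proportion z-test on conversion counts. A generic SciPy/NumPy sketch; the function name and sample figures are illustrative, not from the posting:

```python
import numpy as np
from scipy import stats

def ab_test_proportions(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test (pooled variance) for an A/B conversion experiment.
    Returns the z statistic and the two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - stats.norm.cdf(abs(z)))        # two-sided
    return z, p_value
```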
Posted 1 week ago
7.0 - 12.0 years
20 - 25 Lacs
Gurugram
Work from Office
As a Technical Lead, you will be responsible for leading the development and delivery of the platforms. This includes overseeing the entire product lifecycle from solution design through execution and launch, and building the right team in close collaboration with business and product teams. Primary Responsibilities: Design end-to-end solutions that meet business requirements and align with the enterprise architecture. Define the architecture blueprint, including integration, data flow, application, and infrastructure components. Evaluate and select appropriate technology stacks, tools, and frameworks. Ensure proposed solutions are scalable, maintainable, and secure. Collaborate with business and technical stakeholders to gather requirements and clarify objectives. Act as a bridge between business problems and technology solutions. Guide development teams during the execution phase to ensure solutions are implemented according to design. Identify and mitigate architectural risks and issues. Ensure compliance with architecture principles, standards, policies, and best practices. Document architectures, designs, and implementation decisions clearly and thoroughly. Identify opportunities for innovation and efficiency within existing and upcoming solutions. Conduct regular performance and code reviews, and provide feedback to the development team members to improve professional development. Lead proof-of-concept initiatives to evaluate new technologies. Functional Responsibilities: Facilitate daily stand-up meetings, sprint planning, sprint review, and retrospective meetings. Work closely with the product owner to prioritize the product backlog and ensure that user stories are well-defined and ready for development. Identify and address issues or conflicts that may impact project delivery or team morale. Experience with Agile project management tools such as Jira and Trello. Required Skills: Bachelor's degree in Computer Science, Engineering, or related field.
7+ years of experience in software engineering, with at least 3 years in a solution architecture or technical leadership role. Proficiency with AWS or GCP cloud platforms. Strong implementation knowledge of the JS tech stack (ReactJS, NodeJS). Experience with database engines - MySQL and PostgreSQL - with proven knowledge of database migrations and high-throughput, low-latency use cases. Experience with key-value stores like Redis, MongoDB and similar. Preferred knowledge of distributed technologies - Kafka, Spark, Trino or similar - with proven experience in event-driven data pipelines. Proven experience with setting up big data pipelines to handle high-volume transactions and transformations. Experience with BI tools - Looker, PowerBI, Metabase or similar. Experience with data warehouses like BigQuery, Redshift, or similar. Familiarity with CI/CD pipelines, containerization (Docker/Kubernetes), and IaC (Terraform/CloudFormation). Good to Have: Certifications such as AWS Certified Solutions Architect, Azure Solutions Architect Expert, TOGAF, etc. Experience setting up analytical pipelines using BI tools (Looker, PowerBI, Metabase or similar) and low-level Python tools like Pandas, NumPy, PyArrow. Experience with data transformation tools like DBT, SQLMesh or similar. Experience with data orchestration tools like Apache Airflow, Kestra or similar.
Posted 1 week ago
20.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Staff AI Engineer - MLOps
Company: Rapid7
Team: AI Center of Excellence
Team Overview:
Cross-functional team of Data Scientists and AI Engineers
Mission: Leverage AI/ML to protect customer attack surfaces
Partners with Detection and Response teams, including MDR
Encourages creativity, collaboration, and research publication
Uses 20+ years of threat analysis and growing patent portfolio
Tech Stack:
Cloud/Infra: AWS (SageMaker, Bedrock), EKS, Terraform
Languages/Tools: Python, Jupyter, NumPy, Pandas, Scikit-learn
ML Focus: Anomaly detection, unlabeled data
Role Summary:
Build and deploy ML production systems
Manage end-to-end data pipelines and ensure data quality
Implement ML guardrails and robust monitoring
Deploy web apps and REST APIs with strong data security
Share knowledge, mentor engineers, collaborate cross-functionally
Embrace agile, iterative development
Requirements:
8–12 years in Software Engineering (3+ in ML deployment on AWS)
Strong in Python, Flask/FastAPI, API development
Skilled in CI/CD, Docker, Kubernetes, MLOps, cloud AI tools
Experience in data pre-processing, feature engineering, model monitoring
Strong communication and documentation skills
Collaborative mindset, growth-oriented problem-solving
Preferred Qualifications:
Experience with Java
Background in the security industry
Familiarity with AI/ML model operations, LLM experimentation
Knowledge of model risk management (drift monitoring, hyperparameter tuning, registries)
About Rapid7: Rapid7 is committed to securing the digital world through passion, collaboration, and innovation. With over 10,000 customers globally, it offers a dynamic, growth-focused workplace and tackles major cybersecurity challenges with diverse teams and a mission-driven approach.
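The drift-monitoring requirement above is commonly implemented as a population stability index (PSI) between a training-time baseline and live data. A NumPy-only sketch; the bin count and the 0.2 rule of thumb are conventional defaults, not anything from the posting:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline ('expected') and live ('actual') distribution.
    Values above ~0.2 are conventionally treated as significant drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    eps = 1e-6  # guard against log(0) / division by zero in empty bins
    e_pct, a_pct = e_pct + eps, a_pct + eps
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))
```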
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Data scientist with strong background in data mining, machine learning, recommendation systems, and statistics. Should possess signature strengths of a qualified mathematician with ability to apply concepts of Mathematics and Applied Statistics, with specialization in one or more of NLP, Computer Vision, Speech, or Data Mining, to develop models that provide effective solutions. A strong data engineering background with hands-on coding capabilities is needed to own and deliver outcomes. A Master’s or PhD Degree in a highly quantitative field (Computer Science, Machine Learning, Operational Research, Statistics, Mathematics, etc.) or equivalent experience, 5+ years of industry experience in predictive modelling, data science and analysis, with prior experience in an ML or data scientist role and a track record of building ML or DL models. Responsibilities and skills Work with our customers to deliver a ML/DL project from beginning to end, including understanding the business need, aggregating data, exploring data, building & validating predictive models and deploying completed models to deliver business impact to organizations. Selecting features, building and optimizing classifiers using ML techniques. Data mining using state-of-the-art methods, creating text mining pipelines to clean & process large unstructured datasets to reveal high-quality information and hidden insights using machine learning techniques. Should be able to appreciate and work on one of the following: Computer Vision problems, for example, extract rich information from images to categorize and process visual data, develop machine learning algorithms for object and image classification, experience in using DBSCAN, PCA, Random Forests and Multinomial Logistic Regression to select the best features to classify objects.
OR Deep understanding of NLP such as fundamentals of information retrieval, deep learning approaches, transformers, attention models, text summarisation, attribute extraction etc. Preferable experience in one or more of the following areas: recommender systems, moderation of user-generated content, sentiment analysis, etc. OR Speech recognition, speech to text and vice versa, understanding NLP and IR, text summarisation, statistical and deep learning approaches to text processing, with experience of having worked in these areas. Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, Decision Forests, etc. Appreciation for deep learning frameworks like MXNet, Caffe2, Keras, TensorFlow. Experience in working with GPUs to develop models and handling terabyte-size datasets. Experience with common data science toolkits such as R, Weka, NumPy, MATLAB, mlr, MLlib, scikit-learn, caret etc. - excellence in at least one of these is highly desirable. Should be able to work hands-on in Python, R etc. Should closely collaborate & work with engineering teams to iteratively analyse data using Scala, Spark, Hadoop, Kafka, Storm etc. Experience with NoSQL databases and familiarity with data visualization tools will be of great advantage.
What will you experience in terms of culture at Sahaj?
A culture of trust, respect and transparency
Opportunity to collaborate with some of the finest minds in the industry
Work across multiple domains
What are the benefits of being at Sahaj?
Unlimited leaves
Life insurance & private health insurance
Stock options
No hierarchy
Open Salaries
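Of the algorithms named above (k-NN, Naive Bayes, SVM, decision forests), k-NN is the simplest to sketch end to end. A toy scikit-learn example; the one-dimensional data is purely illustrative:

```python
from sklearn.neighbors import KNeighborsClassifier

# Two well-separated 1-D clusters: class 0 near x=0, class 1 near x=10.
X = [[0.0], [1.0], [10.0], [11.0]]
y = [0, 0, 1, 1]

# Classify new points by majority vote among the 3 nearest neighbors.
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
predictions = knn.predict([[0.5], [10.5]])
```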
Posted 1 week ago
0 years
0 Lacs
India
On-site
Role: AI Engineer Join AiDP: Revolutionizing Document Automation through AI At AiDP, we're transforming complex document workflows into seamless experiences with powerful AI-driven automation. We're on a mission to redefine efficiency, accuracy, and collaboration in finance, insurance, and compliance. To continue pushing boundaries, we’re looking for exceptional talent. Your Mission: Develop, deploy, and optimize cutting-edge machine learning models for accurate extraction and structuring of data from complex documents. Design and implement scalable NLP pipelines to handle vast quantities of unstructured and structured data. Continuously refine models through experimentation and data-driven analysis to maximize accuracy and efficiency. Collaborate closely with product and engineering teams to deliver impactful, real-world solutions. We’re looking for: Proven expertise in NLP, machine learning, and deep learning with solid knowledge of frameworks such as PyTorch, TensorFlow, Hugging Face, or scikit-learn. Strong proficiency in Python and experience with data processing tools (Pandas, NumPy, Dask). Experience deploying models to production using containerization technologies (Docker, Kubernetes) and cloud platforms (AWS, Azure, GCP). Familiarity with version control systems (Git) and continuous integration/continuous deployment (CI/CD) pipelines. Background in computer science, including understanding of algorithms, data structures, and software engineering best practices. Strong analytical thinking, problem-solving skills, and passion for tackling challenging issues in document automation and compliance workflows.
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Responsibility: GenAI experience is a must. We are looking for a data scientist who will help us discover the information hidden in vast amounts of data and help us make smarter decisions to deliver AI/ML-based enterprise software products.
• Develop solutions related to machine learning, natural language processing, deep learning and Generative AI to address business needs.
• Your primary focus will be on applying language/vision techniques, developing LLM-based applications and building high-quality prediction systems.
• Analyze Data: Collaborate with cross-functional teams to understand data requirements and identify relevant data sources. Analyze and preprocess data to extract valuable insights and ensure data quality.
• Evaluation and Optimization: Evaluate model performance using appropriate metrics and iterate on solutions to enhance performance and accuracy. Continuously optimize algorithms and models to adapt to evolving business requirements.
• Documentation and Reporting: Document methodologies, findings, and outcomes in clear and concise reports. Communicate results effectively to technical and non-technical stakeholders.
Work experience background required:
• Experience building software from the ground up in a corporate or startup environment.
Essential skillsets required:
• 3-6 years' experience in software development
• Educational Background: Strong computer science and Math/Statistics
• Experience with open-source LLMs and the LangChain framework, and designing efficient prompts for LLMs.
• Proven ability with NLP and text-based extraction techniques.
• Experience in Generative AI technologies, such as diffusion and/or language models.
• Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, Decision Forests, etc.
• Familiarity with cloud computing platforms such as GCP or AWS; experience deploying and monitoring models in cloud environments.
• Experience with common data science toolkits, such as NumPy, Pandas, etc.
• Proficiency in using query languages such as SQL
• Good applied statistics skills, such as distributions, statistical testing, regression, etc.
• Experience working with large data sets along with data modeling, language development, and database technologies
• Knowledge of Machine Learning and Deep Learning frameworks (e.g., TensorFlow, Keras, Scikit-Learn, CNTK, or PyTorch), NLP, recommender systems, personalization, segmentation, microservices architecture and API development.
• Ability to adapt to a fast-paced, dynamic work environment and learn new technologies quickly.
• Excellent verbal and written communication skills
Posted 1 week ago
1.0 - 2.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Company Profile Morgan Stanley is a leading global financial services firm providing a wide range of investment banking, securities, investment management and wealth management services. The Firm's employees serve clients worldwide including corporations, governments and individuals from more than 1,200 offices in 43 countries. As a market leader, the talent and passion of our people is critical to our success. Together, we share a common set of values rooted in integrity, excellence and strong team ethic. Morgan Stanley can provide a superior foundation for building a professional career - a place for people to learn, to achieve and grow. A philosophy that balances personal lifestyles, perspectives and needs is an important part of our culture. Department Profile From global institutions to hedge funds, investors come to Morgan Stanley for sales, trading, and market-making services in almost every type of financial instrument in all the world’s financial markets. Morgan Stanley professionals use our network and technology to provide liquidity and sophisticated analysis, to manage risk and execute reliably in the fast-changing markets. Morgan Stanley’s Institutional Equity Division (IED) is a world leader in the origination, distribution and trading of equity, equity-linked and equity-derivative securities. Our broad and deep client relationships, market-leading platform and intellectual insights enable us to be a world-class service provider to our clients for their financing, market access and portfolio management needs. Global Markets Group is the offshoring arm of Morgan Stanley’s Sales & Trading businesses in India. It covers functions across IED ranging from those associated with sales, trading, analytics, strats to risk management.
Primary Responsibilities The Morgan Stanley Institutional Equity Division (IED) is a global leader in the origination, distribution and trading of equity, equity-linked and equity-derivative securities. The Quantitative Investment Strategies (QIS) group within IED creates rules-based investment strategies that provide our clients with a wide range of exposures. We are seeking a highly skilled Data Engineer with at least 1-2 years of experience to join the StratLabs Data team in Mumbai. The successful candidate will be an integral part of the Global StratLabs team, working on data engineering, data pipeline development and other innovative projects.
Key Responsibilities
Quants: statistical analysis, math modelling (optimization methods), Python programming, financial analysis (portfolio management techniques, derivatives)
IT data engineering: ETL (extract, transform, load) processes, data storage and processing, big data technologies, data pipeline development, data quality
UI/UX/Infographic: wireframing and prototyping, visual and interaction design, information architecture (structuring and organizing content), infographic creation
Skills Required (essential)
Minimum of 1-2 years of experience in data science or a related field.
Bachelor’s degree in Statistics/Computer Science/Math/Engineering or related fields.
Strong proficiency in Python, Pandas, and NumPy.
Strong interest and demonstrated experience in data science and quantitative domains.
Comfortable with UNIX/Linux.
Excellent problem-solving and analytical skills.
Ability to work collaboratively in a team environment.
Experience working in cross-functional teams, bridging technical and design perspectives.
Strong communication skills, both written and verbal.
Drive and desire to work in an intense team-oriented environment.
In addition, the below knowledge is not critical but useful for the role.
Familiarity with equity or fixed income markets / financial market awareness a plus
Knowledge of KDB/Q
Experience with visualization and dashboards
Morgan Stanley is an equal opportunities employer. We work to provide a supportive and inclusive environment where all individuals can maximize their full potential. What You Can Expect From Morgan Stanley We are committed to maintaining the first-class service and high standard of excellence that have defined Morgan Stanley for over 89 years. Our values - putting clients first, doing the right thing, leading with exceptional ideas, committing to diversity and inclusion, and giving back - aren’t just beliefs, they guide the decisions we make every day to do what's best for our clients, communities and more than 80,000 employees in 1,200 offices across 42 countries. At Morgan Stanley, you’ll find an opportunity to work alongside the best and the brightest, in an environment where you are supported and empowered. Our teams are relentless collaborators and creative thinkers, fueled by their diverse backgrounds and experiences. We are proud to support our employees and their families at every point along their work-life journey, offering some of the most attractive and comprehensive employee benefits and perks in the industry. There’s also ample opportunity to move about the business for those who show passion and grit in their work. Our skilled and creative workforce is comprised of individuals drawn from a broad cross section of the global communities in which we operate and who reflect a variety of backgrounds, talents, perspectives, and experiences. Our strong commitment to a culture of inclusion is evident through our constant focus on recruiting, developing, and advancing individuals based on their skills and talents.
Posted 1 week ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
We're Hiring: Computer Vision Engineer Location: India (Remote/Hybrid) Type: Full-Time | Applied AI R&D Experience: Minimum 4 years Note: This opportunity is for experienced professionals only. If you're a fresher or currently pursuing your degree, we appreciate your interest, but this role is not a fit at the moment. Company Overview Weai Labs is an India-based AI/ML research and development company focused on accelerating the transformation of cutting-edge ideas into real-world solutions. We partner with enterprises, research institutions, startups, and universities to build impactful AI products. Our mission is to bridge the gap between innovation and deployment in the AI/ML space. We are looking for a talented and experienced Computer Vision Engineer to join our growing team. This role is ideal for someone passionate about building applied AI systems that make a tangible difference. About the Role: Computer Vision Engineer In this role, you will lead the design and implementation of advanced computer vision models for image and video analysis. You’ll contribute directly to commercial products involving object detection, keypoint-based scoring, biometric estimation, and real-time tracking. Key Responsibilities: Design and develop computer vision models for: Visual attribute estimation (classification/regression tasks) Scoring using keypoint detection and geometric features Object tracking and identification across sequences Build and train deep learning models using PyTorch or TensorFlow Apply advanced techniques including: Object detection (YOLO, Detectron2, etc.) Keypoint estimation (MediaPipe, DeepLabCut, etc.)
Similarity learning (Siamese networks or related architectures) Collaborate with engineering teams to define system requirements and integrate AI models into cloud-based platforms Contribute to optimization and deployment pipelines using OpenCV, NumPy, and cloud compute resources Minimum Qualifications: Bachelor’s or Master’s degree in Engineering, Computer Science, or a related field with a focus on Computer Vision or Machine Learning Minimum of 4 years of hands-on experience in deep learning and computer vision Proficiency in Python and experience with frameworks like PyTorch and TensorFlow Solid understanding of object detection, classification, and visual feature extraction Experience with image processing tools such as OpenCV Familiarity with biometric matching or similarity-based recognition systems Preferred Qualifications: Experience building production-ready AI systems in a cloud or SaaS environment Familiarity with keypoint tracking, statistical scoring systems, or visual measurement techniques Exposure to edge or embedded vision systems Domain experience in areas such as medical imaging, agriculture, sports analytics, or wildlife monitoring Why Join Us At Weai Labs, you’ll be part of a mission-driven team dedicated to solving real-world problems with cutting-edge AI. This is an opportunity to work on high-impact projects that integrate science, engineering, and scalable technology. To Apply: Submit your resume or connect with us here on LinkedIn to know more. jobs@weailabs.com +91 8072457947 (WhatsApp only)
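Keypoint-based scoring, as mentioned in the responsibilities above, is commonly evaluated with a PCK-style metric: the fraction of predicted keypoints within a pixel threshold of ground truth. A NumPy sketch; the function name and threshold are illustrative assumptions:

```python
import numpy as np

def keypoint_score(pred, truth, threshold=5.0):
    """Fraction of predicted keypoints within `threshold` pixels of the
    ground-truth keypoints (a PCK-style keypoint-estimation metric)."""
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    dists = np.linalg.norm(pred - truth, axis=1)  # per-keypoint Euclidean error
    return float((dists <= threshold).mean())
```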
Posted 1 week ago
5.0 - 6.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Experience: 5-6 years Key Responsibilities Process, analyze, and interpret time-series data from MEMS sensors (e.g., accelerometers, gyroscopes, pressure sensors). Develop and apply statistical methods to identify trends, anomalies, and key performance metrics. Compute and optimize KPIs related to sensor performance, reliability, and drift analysis. Utilize MATLAB toolboxes (e.g., Data Cleaner, Ground Truth Labeler) or Python libraries for data validation, annotation, and anomaly detection. Clean, preprocess, and visualize large datasets to uncover actionable insights. Collaborate with hardware engineers, software developers, and product owners to support end-to-end data workflows. Convert and format data into standardized schemas for use in data pipelines and simulations. Generate automated reports and build dashboards using Power BI or Tableau. Document methodologies, processes, and findings in clear and concise technical reports. Required Qualifications Proficiency in Python or MATLAB for data analysis, visualization, and reporting. Strong foundation in time-series analysis, signal processing, and statistical modeling (e.g., autocorrelation, moving averages, seasonal decomposition). Experience working with MEMS sensors and sensor data acquisition systems. Hands-on experience with pandas, NumPy, SciPy, scikit-learn, and matplotlib. Ability to develop automated KPI reports and interactive dashboards (Power BI or Tableau). Preferred Qualifications Prior experience with data from smartphones, hearables, or wearable devices. Advanced knowledge in MEMS sensor data wrangling techniques. Familiarity with cloud platforms such as AWS, Azure, or Google Cloud Platform. Exposure to real-time data streaming and processing frameworks/toolboxes.
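A common building block for the anomaly-detection and moving-average work described above is a rolling z-score screen over a sensor trace. A pandas sketch; window length and threshold are illustrative defaults, not from the posting:

```python
import numpy as np
import pandas as pd

def rolling_zscore_anomalies(series, window=20, z_thresh=3.0):
    """Flag samples whose rolling z-score exceeds z_thresh -- a simple
    anomaly screen for MEMS sensor time series."""
    s = pd.Series(series, dtype=float)
    mean = s.rolling(window, min_periods=window).mean()
    std = s.rolling(window, min_periods=window).std()
    z = (s - mean) / std
    return z.abs() > z_thresh  # boolean mask; NaNs (warm-up) compare False
```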
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position: Azure Data Engineer. Location: Hyderabad. Mandatory Skills: Azure Databricks, PySpark. Experience: 5 to 9 years. Notice Period: 0 to 30 days / immediate joiner / serving notice period. Interview Date: 13-June-25. Interview Mode: Virtual drive.

Must-have experience: Strong design and data solutioning skills. Hands-on PySpark experience with complex transformations and large datasets. Good command of and hands-on experience in Python. Experience with the following concepts, packages, and tools: object-oriented and functional programming; NumPy, Pandas, Matplotlib, requests, pytest; Jupyter, PyCharm, and IDLE; Conda and virtual environments. Working experience with Hive, HBase, or similar is a must.

Azure skills: Working experience in Azure Data Lake, Azure Data Factory, Azure Databricks, and Azure SQL Databases is a must. Azure DevOps. Azure AD integration, service principals, pass-through login, etc. Networking: VNet, private links, service connections, etc. Integrations: Event Grid, Service Bus, etc.

Database skills: Experience with at least one of Oracle, Postgres, or SQL Server. Oracle PL/SQL or T-SQL experience. Data modelling.

Thank you
Posted 1 week ago
5.0 years
0 Lacs
India
On-site
Quantitative Analyst: Hillroute Capital About Hillroute: Hillroute Capital is a regulated quantitative hedge fund specializing in global digital asset trading. We leverage sophisticated quantitative methodologies and advanced technology to achieve exceptional risk-adjusted returns. Our transparent approach and diverse, experienced team allow us to excel in the rapidly evolving digital asset market. About the Role: We are seeking a highly skilled Quantitative Analyst to develop, test, and refine systematic trading models across global digital asset markets. This role offers flexibility in approach—candidates with expertise in systematic strategies, options trading, statistical arbitrage, backtesting, or machine learning are equally encouraged to apply. This role will be in the US shift. Key Responsibilities: Strategy Development & Backtesting: Design and rigorously backtest quantitative trading models, ensuring predictive reliability and strong risk management. Quantitative & Statistical Analysis: Apply advanced statistical modeling, econometric analysis, or financial mathematics to extract market insights. Risk Management: Contribute actively to robust risk management frameworks, identifying potential risks and implementing mitigation strategies. Innovation: Regularly generate and test new ideas and strategies, pushing boundaries to enhance fund performance. Preferred Qualifications: 3–5 years experience in quantitative analysis, trading, or research roles within finance. 1-3 years experience in running quantitative machine learning models. Advanced degree in quantitative disciplines (Mathematics, Physics, Statistics, Computer Science, Engineering). Strong Python programming skills (NumPy, Pandas), and familiarity with backtesting frameworks (Backtrader, QuantConnect). Solid knowledge in options pricing, volatility modeling, statistical arbitrage, or systematic strategies. Familiarity with financial data platforms (Bloomberg, Refinitiv, Quandl). 
Exposure to cloud computing environments (AWS, GCP, Azure). Experience or interest in applying machine learning techniques (XGBoost, TensorFlow, PyTorch) is a plus, but not mandatory. Participation in Kaggle or similar platforms is beneficial but not required. Key Performance Indicators (KPIs): Model profitability and risk-adjusted returns. Backtest reliability and accuracy. Effectiveness in risk management. Contribution to innovation and research quality. What We Offer: Competitive compensation and performance-based incentives. The opportunity to pioneer quantitative strategies in the dynamic digital asset industry. A collaborative, inclusive, and flexible working environment. Professional growth in an innovative, fast-paced hedge fund setting. If you're passionate about quantitative finance and thrive in a dynamic, data-driven environment, we invite you to join our team. Apply directly via LinkedIn: Hillroute Capital
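The strategy-backtesting loop described above (signal, position, returns, risk-adjusted metric) can be sketched in a few lines of pandas; frameworks like Backtrader or QuantConnect wrap the same idea in far more machinery. Everything here, prices included, is synthetic:

```python
import numpy as np
import pandas as pd

# Synthetic daily close prices: a geometric random walk stands in for real data.
rng = np.random.default_rng(0)
prices = pd.Series(100.0 * np.exp(np.cumsum(rng.normal(0.0005, 0.02, 500))))

fast = prices.rolling(20).mean()
slow = prices.rolling(50).mean()

# Long 1 unit while the fast MA is above the slow MA; shift(1) enters on the
# *next* bar, which avoids look-ahead bias.
position = (fast > slow).astype(float).shift(1).fillna(0.0)
daily_ret = prices.pct_change().fillna(0.0)
strat_ret = position * daily_ret

equity = (1.0 + strat_ret).cumprod()
sharpe = np.sqrt(252) * strat_ret.mean() / strat_ret.std()  # annualised
```

The `shift(1)` is the part interviewers probe: without it the position on day t uses day t's close, so the backtest silently trades on information it could not have had.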
Posted 1 week ago
4.0 years
0 Lacs
Chandigarh, India
On-site
Experience Required: 4+ Years Key Responsibilities: Design, build, and maintain scalable and reliable data pipelines on Databricks, Snowflake, or equivalent cloud platforms. Ingest and process structured, semi-structured, and unstructured data from a variety of sources including APIs, RDBMS, and file systems. Perform data wrangling, cleansing, transformation, and enrichment using PySpark, Pandas, NumPy, or similar libraries. Optimize and manage large-scale data workflows for performance, scalability, and cost-efficiency. Write and optimize complex SQL queries for transformation, extraction, and reporting. Design and implement efficient data models and database schemas with appropriate partitioning and indexing strategies for a Data Warehouse or Data Mart. Leverage cloud services (e.g., AWS S3, Glue, Kinesis, Lambda) for storage, processing, and orchestration. Build containerized solutions using Docker and manage deployment pipelines via CI/CD tools such as Azure DevOps, GitHub Actions, or Jenkins.
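The wrangling, cleansing, and enrichment step called out above can be illustrated with a small pandas pipeline; the column names and the "amount must be positive" business rule are invented for the example:

```python
import pandas as pd

# Messy extract with typical real-world problems: padded keys, string
# numerics, inconsistent casing, duplicates, and missing values.
raw = pd.DataFrame({
    "customer_id": [" 101", "102", "102", "103", None],
    "amount": ["250.5", "oops", "99.0", "-1", "70"],
    "country": ["in", "IN", "in", "US", "us"],
})

clean = (
    raw.dropna(subset=["customer_id"])                    # rows must have a key
    .assign(
        customer_id=lambda d: d["customer_id"].str.strip(),
        amount=lambda d: pd.to_numeric(d["amount"], errors="coerce"),
        country=lambda d: d["country"].str.upper(),
    )
    .query("amount > 0")                                  # drops bad numerics too
    .drop_duplicates(subset=["customer_id"], keep="first")
    .reset_index(drop=True)
)
```

`errors="coerce"` turns unparseable numerics into NaN, which the `amount > 0` filter then drops, so one rule handles both invalid and out-of-range values.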
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title : AI/ML Intern Location : Gurgaon, India Employment Type : Internship (Paid) Stipend : As per Industry Standards About Aaizel Tech Labs Aaizel Tech Labs is a pioneering tech startup at the intersection of cybersecurity, AI, geospatial solutions, and more. We are passionate about leveraging technology to develop high-performance products and cutting-edge solutions. As a growing startup, we seek dynamic individuals eager to work on transformative projects in AI and Machine Learning. Role Overview We are looking for a motivated AI/ML Intern to join our data science team. This internship offers hands-on experience with model development, data engineering, and deployment in real-world projects. You will collaborate with experienced professionals and contribute to initiatives spanning predictive analytics, computer vision, and more helping to shape the future of technology at Aaizel Tech Labs. Key Responsibilities 1. Model Development & Optimization • ML Model Implementation: Assist in designing, implementing, and deploying machine learning models for applications like predictive analytics and anomaly detection. • Deep Learning Exposure: Gain experience with deep learning frameworks by working with CNNs, RNNs, and exploring generative models (GANs) on guided projects. • Experimentation: Help run experiments and tune models using basic hyperparameter optimization techniques (Grid Search, etc.). 2. Data Engineering & Preprocessing • Data Preparation: Support the collection, cleaning, and preprocessing of datasets using libraries like Pandas and NumPy. • ETL Assistance: Assist in developing simple ETL pipelines to process data from diverse sources such as IoT sensors or satellite imagery. • Integration: Learn to integrate data from APIs and databases to build comprehensive datasets for analysis. 3. 
Research & Algorithm Development • Innovation Exposure: Research state-of-the-art machine learning techniques (e.g., Transfer Learning, Transformer models) and assist in applying these to ongoing projects. • Algorithm Exploration: Participate in team discussions to brainstorm new approaches for solving real-world problems in cybersecurity, climate monitoring, or geospatial data analysis.

4. Deployment & MLOps • Deployment Support: Gain hands-on experience deploying models using container technologies like Docker and basic CI/CD pipelines. • Cloud Platforms: Assist in experiments with cloud platforms (AWS, Azure, or GCP) for scalable model serving solutions. • Lifecycle Management: Learn best practices for model versioning, monitoring, and maintenance.

5. Performance Evaluation & Tuning • Model Metrics: Help evaluate model performance using metrics such as F1 Score, AUC-ROC, and other domain-relevant measures. • Tuning Assistance: Support the process of model tuning through guided experiments and parameter adjustments.

6. Collaboration & Code Quality • Team Integration: Collaborate with data engineers, cybersecurity experts, and geospatial analysts to integrate AI solutions into end-to-end products. • Coding Standards: Contribute to maintaining high-quality codebases by following best practices and using version control (Git). • Documentation: Assist in documenting your work, including model specifications, experiments, and deployment processes.

7. Monitoring & Maintenance • Dashboard Support: Participate in the creation of monitoring dashboards (using tools like Grafana or Prometheus) to track model performance. • Feedback Loops: Help develop feedback mechanisms to retrain models based on real-time data and evolving application needs.

Skills & Qualifications

Required Qualifications: • Currently pursuing or recently completed a Bachelor's degree in Computer Science, Data Science, Machine Learning, or a related field.
• Proficiency in Python and familiarity with libraries such as Pandas, NumPy, and scikit-learn. • Basic understanding of machine learning algorithms and experience (academic projects or internships) with model development. • Exposure to one or more deep learning frameworks (e.g., TensorFlow, PyTorch) is a plus. • Ability to work collaboratively in a team-oriented environment. • Strong analytical and problem-solving skills, with attention to detail. • Good written and verbal communication skills. Preferred Qualifications: • Familiarity with data visualization tools (e.g., Matplotlib, Seaborn) and basic dashboarding. • Some experience with SQL and NoSQL databases. • Interest in cloud platforms (AWS, Azure, or Google Cloud) and containerization (Docker). • Knowledge of version control systems (Git) and basic CI/CD concepts. • Prior internship or project experience in AI/ML is advantageous. Learning Opportunities • Practical Projects: Work on real-world AI/ML projects that contribute directly to our product development. • Mentorship: Benefit from one-on-one guidance from experienced data scientists and machine learning engineers. • Skill Development: Gain exposure to industry-standard tools, frameworks, and best practices in AI and ML. • Cross-Disciplinary Exposure: Collaborate with experts in cybersecurity, geospatial analysis, and data engineering. • Career Growth: Develop your professional network and acquire skills that could lead to a full-time opportunity. Application Process Please submit your resume and a cover letter outlining your relevant experience and how you can contribute to Aaizel Tech Labs' success. Send your application to hr@aaizeltech.com, bhavik@aaizeltech.com or anju@aaizeltech.com. Join Aaizel Tech Labs and be part of a team that's shaping the future of Big Data & AI-driven applications!
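As a concrete reference for the evaluation metrics this internship mentions, here is the F1 score computed from scratch with NumPy (the sample labels are made up for illustration; in practice scikit-learn's `f1_score` does the same):

```python
import numpy as np

def f1_binary(y_true, y_pred):
    """F1 score for binary labels: harmonic mean of precision and recall."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))  # true positives
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))  # false positives
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# 3 true positives, 1 false positive, 1 false negative -> P = R = 0.75
score = f1_binary([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 1, 1])
```

Unlike plain accuracy, F1 ignores true negatives, which is why it is the usual choice for the imbalanced anomaly-detection datasets mentioned earlier in the posting.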
Posted 1 week ago
8.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
At Rearc, we're committed to empowering engineers to build awesome products and experiences. Success as a business hinges on our people's ability to think freely, challenge the status quo, and speak up about alternative problem-solving approaches. If you're an engineer driven by the desire to solve problems and make a difference, you're in the right place! Our approach is simple — empower engineers with the best tools possible to make an impact within their industry. We're on the lookout for engineers who thrive on ownership and freedom, possessing not just technical prowess, but also exceptional leadership skills. Our ideal candidates are hands-on leaders who don't just talk the talk but also walk the walk, designing and building solutions that push the boundaries of cloud computing. As a Senior Data Engineer at Rearc, you will be at the forefront of driving technical excellence within our data engineering team. Your expertise in data architecture, cloud-native solutions, and modern data processing frameworks will be essential in designing workflows that are optimized for efficiency, scalability, and reliability. You'll leverage tools like Databricks, PySpark, and Delta Lake to deliver cutting-edge data solutions that align with business objectives. Collaborating with cross-functional teams, you will design and implement scalable architectures while adhering to best practices in data management and governance . Building strong relationships with both technical teams and stakeholders will be crucial as you lead data-driven initiatives and ensure their seamless execution. What You Bring 8+ years of experience in data engineering, showcasing expertise in diverse architectures, technology stacks, and use cases. Strong expertise in designing and implementing data warehouse and data lake architectures, particularly in AWS environments. 
Extensive experience with Python for data engineering tasks, including familiarity with libraries and frameworks commonly used in Python-based data engineering workflows. Proven experience with data pipeline orchestration using platforms such as Airflow, Databricks, DBT or AWS Glue. Hands-on experience with data analysis tools and libraries like Pyspark, NumPy, Pandas, or Dask. Proficiency with Spark and Databricks is highly desirable. Experience with SQL and NoSQL databases, including PostgreSQL, Amazon Redshift, Delta Lake, Iceberg and DynamoDB. In-depth knowledge of data architecture principles and best practices, especially in cloud environments. Proven experience with AWS services, including expertise in using AWS CLI, SDK, and Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or AWS CDK. Exceptional communication skills, capable of clearly articulating complex technical concepts to both technical and non-technical stakeholders. Demonstrated ability to quickly adapt to new tasks and roles in a dynamic environment. What You'll Do Strategic Data Engineering Leadership: Provide strategic vision and technical leadership in data engineering, guiding the development and execution of advanced data strategies that align with business objectives. Architect Data Solutions: Design and architect complex data pipelines and scalable architectures, leveraging advanced tools and frameworks (e.g., Apache Kafka, Kubernetes) to ensure optimal performance and reliability. Drive Innovation: Lead the exploration and adoption of new technologies and methodologies in data engineering, driving innovation and continuous improvement across data processes. Technical Expertise: Apply deep expertise in ETL processes, data modelling, and data warehousing to optimize data workflows and ensure data integrity and quality. 
Collaboration and Mentorship: Collaborate closely with cross-functional teams to understand requirements and deliver impactful data solutions; mentor and coach junior team members, fostering their growth and development in data engineering practices. Thought Leadership: Contribute to thought leadership in the data engineering domain through technical articles, conference presentations, and participation in industry forums. Some More About Us: Founded in 2016, we pride ourselves on fostering an environment where creativity flourishes, bureaucracy is non-existent, and individuals are encouraged to challenge the status quo. We're not just a company; we're a community of problem-solvers dedicated to improving the lives of fellow software engineers. Our commitment is simple: finding the right fit for our team and cultivating a desire to make things better. If you're a cloud professional intrigued by our problem space and eager to make a difference, you've come to the right place. Join us, and let's solve problems together!
Posted 1 week ago
25.0 years
0 Lacs
India
Remote
Opportunities: Full-time remote or work-from-home. Day shift, AEST. Health insurance. Career growth.

About the Role: We are looking for a passionate and motivated individual to join our team as an AI & Data Science Engineer. If you have a strong foundation in Python programming, SQL, and working with APIs, and are eager to learn and grow in the field of Artificial Intelligence (AI), Natural Language Processing (NLP), and Machine Learning (ML), this role is perfect for you! As part of our team, you will have the opportunity to work on cutting-edge AI technologies, including generative AI models, and develop solutions that solve real-world problems.

Key Responsibilities: Learn and contribute to the design and development of AI and machine learning models. Work with structured and unstructured data to uncover insights and build predictive models. Assist in creating NLP solutions for tasks like text classification, sentiment analysis, and summarisation. Gain hands-on experience in deep learning for image processing, speech recognition, and generative AI. Write clean and efficient Python code for data analysis and model development. Work with SQL databases to retrieve and analyse data. Learn how to integrate APIs into AI workflows. Explore generative AI technologies (e.g., GPT, DALL·E) and contribute to innovative solutions. Collaborate with senior team members to develop impactful AI-powered applications. Document your findings and contribute to knowledge-sharing within the team.

Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. Strong Python programming skills and familiarity with libraries like Pandas, NumPy, and Matplotlib. Basic knowledge of SQL for data manipulation and extraction. Understanding of Machine Learning concepts and algorithms. Interest in Natural Language Processing (NLP) and familiarity with tools like spaCy, NLTK, or Hugging Face is a plus.
Willingness to learn and work with deep learning frameworks such as TensorFlow or PyTorch. Problem-solving mindset with the ability to work independently and within a team. Good communication skills and enthusiasm for learning new technologies.

Technical requirements: Windows 11 operating system or macOS 13+. 256 GB storage (minimum). 16 GB RAM (minimum). Dual-core CPU (minimum). Camera: HD webcam (720p). Headset: noise-cancelling (preferably). Internet speed: 50 Mbps (minimum).

Why Join Us? Opportunity to work on cutting-edge data science, machine learning, and AI projects. A collaborative and inclusive work environment that values continuous learning and innovation. Access to resources and mentorship to enhance your skills in NLP, ML, DL, and Generative AI. Competitive compensation package and growth opportunities.

Note: Include your LinkedIn account in your resume.

About The Company: Freedom Property Investors is the largest and number one property investment company in Australia, with its main offices in the Sydney and Melbourne CBDs. We were awarded the 3rd fastest-growing business in Australia across all industries according to the Australian Financial Review. We are privileged to have 25+ years of combined experience between our two founders, who have served over 10,000 valued members with over 300 full-time staff spread across Australia and growing. We pride ourselves on being the industry leaders. It is our mission to serve our valued members, earning over 2,054 positive Google reviews and a 4.8-star rating, which is unheard of in our industry. We need people who share the same values as we do. This opportunity is open to all driven individuals who are committed to helping people and earning life-changing income. Join Australia's largest and number one property investment team and contribute to our mission of helping Australians achieve their goals of financial freedom every day. Apply now!
Posted 1 week ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities: As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, and provide regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to build creative solutions.

Preferred Education: Master's Degree

Required Technical And Professional Expertise: Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data Engineering Skills: strong understanding of ETL pipelines, data modelling, and data warehousing concepts. Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation. Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy. SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred Technical And Professional Experience: Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.
Posted 1 week ago
4.0 - 6.0 years
15 - 27 Lacs
Hyderabad
Work from Office
BULK HIRE – CORE PYTHON AND DATA SCIENCE ENGINEERS – HYDERABAD – HYBRID MODEL WELL-KNOWN IT CLIENT – FOR THEIR USA PRODUCT DEV CLIENT LOCATION: Financial District, Nanakramguda ONLY HYD CANDIDATES 2 DAYS @ OFFICE, REST OF DAYS WFH PRIYA@AXYCUBE.IN Required Candidate profile PYTHON DEVELOPERS – MAX 16 LPA – 4+ YEARS EXPERIENCE – VERY GOOD IN NUMPY, PANDAS AND SQL DATA SCIENCE ENGINEERS – MAX 27 LPA – 4+ YEARS STRONG EXPERIENCE IN AI, ML, DL, PYTHON, PANDAS AND NUMPY
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Greetings! One of our esteemed clients is a Japanese multinational information technology (IT) service and consulting company headquartered in Tokyo, Japan. The company acquired Italy-based Value Team S.p.A. and launched Global One Teams. Join this dynamic, high-impact firm where innovation meets opportunity, and take your career to new heights!

🔍 We Are Hiring: Python, PySpark and SQL Developer (8-12 years). Relevant Exp: 8-12 Years

JD: • Python, PySpark and SQL • 8+ years of experience in Spark, Scala, PySpark for big data processing • Proficiency in Python programming for data manipulation and analysis. • Experience with Python libraries such as Pandas, NumPy. • Knowledge of Spark architecture and components (RDDs, DataFrames, Spark SQL). • Strong knowledge of SQL for querying databases. • Experience with database systems like Lakehouse, PostgreSQL, Teradata, SQL Server. • Ability to write complex SQL queries for data extraction and transformation. • Strong analytical skills to interpret data and provide insights. • Ability to troubleshoot and resolve data-related issues. • Strong problem-solving skills to address data-related challenges. • Effective communication skills to collaborate with cross-functional teams.

Role/Responsibilities: • Work on development activities along with lead activities. • Coordinate with the Product Manager (PdM) and Development Architect (Dev Architect) and handle deliverables independently. • Collaborate with other teams to understand data requirements and deliver solutions. • Design, develop, and maintain scalable data pipelines using Python and PySpark. • Utilize PySpark and Spark scripting for data processing and analysis. • Implement ETL (Extract, Transform, Load) processes to ensure data is accurately processed and stored. • Develop and maintain Power BI reports and dashboards. • Optimize data pipelines for performance and reliability. • Integrate data from various sources into centralized data repositories.
• Ensure data quality and consistency across different data sets. • Analyze large data sets to identify trends, patterns, and insights. • Optimize PySpark applications for better performance and scalability. • Continuously improve data processing workflows and infrastructure.

Interested candidates, please share your updated resume along with the following details: Total Experience: Relevant Experience in Python, PySpark and SQL: Current Location: Current CTC: Expected CTC: Notice Period:

🔒 We assure you that your profile will be handled with strict confidentiality. 📩 Apply now and be part of this incredible journey.

Thanks, Syed Mohammad!! syed.m@anlage.co.in
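A compact illustration of the "complex SQL queries for data extraction and transformation" requirement above, combining a GROUP BY aggregation with a window function. Python's built-in sqlite3 stands in here for the engines the posting lists (Lakehouse, PostgreSQL, Teradata, SQL Server); the schema and figures are invented:

```python
import sqlite3

# In-memory database: rank customers by total order revenue.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('alice', 120.0), ('bob', 75.5), ('alice', 30.0), ('carol', 200.0);
""")

rows = conn.execute("""
    SELECT customer,
           SUM(amount)                             AS revenue,
           RANK() OVER (ORDER BY SUM(amount) DESC) AS revenue_rank
    FROM orders
    GROUP BY customer
    ORDER BY revenue_rank
""").fetchall()
conn.close()
```

The window function runs after grouping, so `SUM(amount)` inside the `OVER` clause ranks the already-aggregated per-customer totals; the same pattern carries over to PostgreSQL or Spark SQL.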
Posted 1 week ago
2.0 - 4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Date: Jun 6, 2025 Location: Mumbai Title: Python Executive_Mumbai

Job Summary: We are seeking a skilled and motivated Python Developer with 2-4 years of hands-on experience in building scalable and efficient applications. The ideal candidate should have a solid understanding of Python programming and related frameworks, and be capable of contributing to both backend logic and integration with other technologies.

Key Responsibilities: Design, develop, test, and deploy Python-based applications and scripts. Write clean, efficient, and reusable code following best practices. Integrate third-party APIs and databases into backend systems. Collaborate with cross-functional teams including frontend developers, testers, and DevOps for product delivery. Debug and resolve issues in existing applications. Document technical specifications and system configurations. Participate in code reviews and contribute to team knowledge-sharing.

Required Skills and Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. 2-4 years of professional experience in Python development. Strong knowledge of Python 3.x and object-oriented programming. Experience with frameworks such as Django or Flask. Hands-on experience with RESTful APIs, JSON, and web services. Experience with SQL and relational databases (e.g., MySQL, PostgreSQL). Familiarity with version control tools (e.g., Git). Strong problem-solving and analytical skills.

Preferred Skills (Good to Have): Experience with Pandas, NumPy, or other data processing libraries. Exposure to cloud platforms like AWS, Azure, or GCP. Familiarity with Docker, CI/CD pipelines, or microservices architecture. Basic understanding of frontend technologies (HTML, CSS, JavaScript).

Qualification: Graduation. No. of Job Positions: 1. Total Experience: 2-4 Years. Domain Experience: Back Office
Posted 1 week ago
0 years
0 Lacs
Kolkata metropolitan area, West Bengal, India
On-site
Job description

About Reboot Robotics Academy: Reboot Robotics Academy is dedicated to empowering students with future-ready skills in Robotics, AI, IoT, and Coding. We are seeking an AI & IoT Trainer to join our team and inspire the next generation of tech innovators!

Key Responsibilities: Conduct hands-on training on AI, Machine Learning, and IoT. Teach fundamental to advanced Python concepts, including data structures, OOP, and automation. Design chatbots. Guide students through real-world AI projects using TensorFlow, OpenCV, and NLP. Introduce concepts of Deep Learning, Neural Networks, and AI Ethics. Provide training on IoT-based applications using ESP32, Arduino, and Raspberry Pi (preferred). Deliver drone programming & robotics workshops (if experienced). Assist in curriculum development, lesson planning, and creating study materials. Provide mentorship and guidance to students for projects and competitions. Stay updated with the latest trends in AI, ML, IoT, and automation technologies.

Required Skills & Qualifications: Strong proficiency in Python (OOP, NumPy, Pandas, Matplotlib). Hands-on experience with AI/ML frameworks (TensorFlow, Keras, Scikit-learn). Knowledge of Deep Learning, NLP, and Computer Vision is a plus. Familiarity with IoT, Arduino, ESP32, Raspberry Pi, and sensor-based automation is preferred. Experience with drone programming is an added advantage. Prior experience in teaching, training, or mentoring is preferred. Excellent communication & presentation skills. Passion for education, technology, and innovation.

Preferred Qualifications: Bachelor's/Master's degree in Computer Science, AI, Data Science, IoT, or a related field. Experience in STEM, Robotics & IoT education is an advantage. Certifications in AI/ML, IoT, or Drone Technology are a plus.

Why Join Us? Work with a leading EdTech academy shaping the future of AI, IoT & Robotics. Opportunity to mentor young minds in cutting-edge technology.
Engage in innovation-driven projects & research opportunities. Growth opportunities in AI, IoT, and drone automation training.

Job Types: Full-time, Fresher
Pay: ₹8,000.00 - ₹20,000.00 per month
Benefits: Leave encashment, Paid sick time
Schedule: Day shift, Fixed shift, Weekend availability
Supplemental Pay: Yearly bonus
Language: English (Required)
Work Location: In person
Posted 1 week ago
Numpy is a widely used library in Python for numerical computing and data analysis. In India, there is a growing demand for professionals with expertise in numpy. Job seekers in this field can find exciting opportunities across various industries. Let's explore the numpy job market in India in more detail.
The average salary range for numpy professionals in India varies based on experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-20 lakhs per annum

Typically, a career in numpy progresses as follows:
- Junior Developer
- Data Analyst
- Data Scientist
- Senior Data Scientist
- Tech Lead

In addition to numpy, professionals in this field are often expected to have knowledge of:
- Pandas
- Scikit-learn
- Matplotlib
- Data visualization
Typical numpy interview questions include:
- Explain the np.where() function in numpy. (medium)
- Differentiate between np.array and np.matrix in numpy. (advanced)

As you explore job opportunities in the field of numpy in India, remember to keep honing your skills and stay updated with the latest developments in the industry. By preparing thoroughly and applying confidently, you can land the numpy job of your dreams!
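Both interview topics mentioned above can be answered concisely in code; a small sketch:

```python
import numpy as np

scores = np.array([45, 82, 67, 91, 38])

# np.where as a vectorised if/else: label every score in one call.
labels = np.where(scores >= 50, "pass", "fail")

# With only a condition argument, np.where returns the matching indices.
passing_idx = np.where(scores >= 50)[0]

# np.array vs np.matrix: on an ndarray, `*` is element-wise and `@` is matrix
# multiplication; np.matrix overloads `*` as matrix multiplication but is
# effectively deprecated, so ndarray plus `@` is the recommended style.
a = np.array([[1, 2], [3, 4]])
elementwise = a * a   # [[1, 4], [9, 16]]
matmul = a @ a        # [[7, 10], [15, 22]]
```

The indices form of np.where is the one candidates most often forget: it returns a tuple of index arrays, one per dimension, hence the `[0]` for a 1-D input.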