
6 Streaming Analytics Jobs

Set up a Job Alert
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

2.0 - 6.0 years

0 Lacs

Maharashtra

On-site

You will be responsible for architecting and building the server and serverless backend infrastructure for an educational platform that provides personalized learning experiences. This involves designing scalable APIs, implementing complex business logic, and integrating with external systems such as gaming servers and educational content providers. Expertise in Azure serverless technologies and educational data management is essential for this role.

Your key responsibilities will include:
- Designing and implementing serverless APIs using Azure Functions and TypeScript
- Deploying SSR-optimized web experiences using NextJS and TypeScript
- Developing complex business logic for adaptive learning algorithms and content delivery
- Building robust data models for courses, playbooks, quests, and analytics
- Implementing authentication and authorization systems using MSAL and Azure AD
- Designing event-driven architectures for real-time data processing

You will also be responsible for:
- Designing and optimizing Azure Cosmos DB container strategies and data models
- Implementing efficient querying patterns for educational content and user analytics
- Managing data relationships between users, content, assessments, and performance metrics
- Ensuring data consistency and integrity across distributed systems
- Optimizing database query performance for scale (supporting 5,000-25,000+ students)

Additionally, you will:
- Build RESTful APIs for content management, user authentication, and analytics
- Integrate with external systems, including gaming servers and OpenStax content
- Develop real-time scoring and assessment APIs
- Create secure file upload and content management endpoints

Other responsibilities include:
- Implementing caching strategies using Redis for optimal performance
- Designing auto-scaling solutions for variable educational workloads
- Monitoring and optimizing API performance and database queries
- Implementing proper error handling and logging for production systems
- Ensuring high availability and disaster recovery capabilities

To be successful in this role, you need:
- 3-5+ years of backend development experience with TypeScript/Node.js
- 2-3+ years of Azure cloud services experience, particularly Azure Functions
- Expert knowledge of Azure Cosmos DB or other NoSQL database systems
- Strong experience with serverless architecture patterns and event-driven design
- Proficiency with authentication systems (Azure AD, MSAL, OAuth)
- Experience with API design, documentation, and versioning strategies

You should also have:
- Experience building systems that handle educational content and user progress tracking
- An understanding of learning management system (LMS) or educational platform requirements
- Experience with assessment and analytics systems
- Familiarity with adaptive learning or personalization algorithms

In addition, you should possess:
- Strong experience with automated testing (unit, integration, and end-to-end)
- Proficiency with CI/CD pipelines and GitHub Actions
- Experience with microservices architecture and API gateway patterns
- Knowledge of security best practices for educational applications
- Familiarity with Nx monorepo build systems or similar

Experience in the following areas is a plus:
- Integrating with external gaming platforms
- Real-time data processing and streaming analytics
- Content delivery networks and media management
- Optimizing applications for high concurrent user loads

You will also help address specific project challenges, such as:
- Handling data complexity by designing sophisticated data models for analytics
- Meeting integration requirements by building secure APIs and integrating with external systems
- Addressing scalability demands by designing systems to scale and implementing efficient data processing for real-time analytics and reporting.
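The real-time data processing this role describes often reduces to windowed aggregation over an event stream. As an illustration only (the function name and event shape below are hypothetical, not from the posting), a tumbling-window average of student scores could be sketched in plain Python like this:

```python
from collections import defaultdict

def tumbling_window_scores(events, window_seconds=60):
    """Group (timestamp, student_id, score) events into fixed, non-overlapping
    time windows and report each student's average score per window."""
    windows = defaultdict(lambda: defaultdict(list))
    for ts, student_id, score in events:
        window_start = ts - (ts % window_seconds)  # align to window boundary
        windows[window_start][student_id].append(score)
    return {
        start: {sid: sum(s) / len(s) for sid, s in per_student.items()}
        for start, per_student in windows.items()
    }
```

In a production system the same grouping logic would typically run inside a managed streaming service rather than in-process; this sketch only shows the windowing idea.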

Posted 21 hours ago

Apply

4.0 - 10.0 years

0 Lacs

Karnataka

On-site

Join a team recognized for leadership, innovation, and diversity.

Requirements:
- A Ph.D. or Master's degree in Computer Science, Engineering, Applied Mathematics, or a related field
- Exposure to the Finance domain and its use cases in a large global enterprise setting
- A minimum of 8 to 10 years of Data Science prototyping experience using machine learning techniques and algorithms such as k-means, k-NN, Naive Bayes, SVM, and Decision Trees, with proficiency in the Python and/or R tool-stack
- A minimum of 8 to 10 years of Machine Learning experience with physical systems
- A minimum of 4 to 6 years of experience with distributed storage and compute tools such as Hive and Spark
- A minimum of 8 to 10 years of experience with deep learning frameworks such as PyTorch and Keras
- Experience designing, building, and deploying models to cloud platforms such as Azure and Databricks is a plus
- Working knowledge and experience implementing Generative AI in industry, and keeping up with the latest developments in Artificial Intelligence, is highly valued
- A research mindset with a problem-solving attitude is a MUST
- Experience with Natural Language Processing models, Streaming Analytics (e.g., Spark Streaming), Recurrent Neural Network architectures, Image Analytics, SQL, and working with remote and global teams is beneficial
- Knowledge of Corporate Finance or Financial Analytics is advantageous

The ideal candidate is results-driven with a positive, can-do attitude. If you meet these requirements, this role could be a great fit for you.

Additional information:
JOB ID: HRD251841
Category: Engineering
Location: Devarabisanahalli Village, KR Varturhobli, East Taluk - Phase I, Bangalore, KARNATAKA, 560103, India
Exempt
Early Career (ALL)
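Among the algorithms this posting lists, k-NN is the simplest to show end to end: classify a point by majority vote among its k nearest neighbors. A minimal standard-library sketch (function and variable names are illustrative, not from the posting; production work would use a library such as scikit-learn):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of (feature_vector, label) pairs."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    neighbors = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```

For example, with training points clustered near (0, 0) labeled "low" and near (5, 5) labeled "high", a query at (0.5, 0.5) is classified "low".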

Posted 6 days ago

Apply

5.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

STAND 8 provides end-to-end IT solutions to enterprise partners across the United States, with offices in Los Angeles, New York, New Jersey, Atlanta, and more, as well as internationally in Mexico and India.

We are seeking a Senior AI Engineer / Data Engineer to join our engineering team and help build the future of AI-powered business solutions. In this role, you'll develop intelligent systems that leverage advanced large language models (LLMs), real-time AI interactions, and cutting-edge retrieval architectures. Your work will directly contribute to products that are reshaping how businesses operate, particularly in recruitment, data extraction, and intelligent decision-making. This is an exciting opportunity for someone who thrives on building production-grade AI systems and working across the full stack of modern AI technologies.

Responsibilities:
- Design, build, and optimize AI-powered systems using multi-modal architectures (text, voice, visual)
- Integrate and deploy LLM APIs from providers such as OpenAI, Anthropic, and AWS Bedrock
- Build and maintain RAG (Retrieval-Augmented Generation) systems with hybrid search, re-ranking, and knowledge graphs
- Develop real-time AI features using streaming analytics and voice interaction tools (e.g., ElevenLabs)
- Build APIs and pipelines using FastAPI or similar frameworks to support AI workflows
- Process and analyze unstructured documents with layout and semantic understanding
- Implement predictive models that power intelligent business recommendations
- Deploy and maintain scalable solutions using AWS services (EC2, S3, RDS, Lambda, Bedrock, etc.)
- Use Docker for containerization and manage CI/CD workflows and version control via Git
- Debug, monitor, and optimize performance of large-scale data pipelines
- Collaborate cross-functionally with product, data, and engineering teams

Qualifications:
- 5+ years of experience in AI/ML or data engineering with Python in production environments
- Hands-on experience with LLM APIs and frameworks such as OpenAI, Anthropic, Bedrock, or LangChain
- Production experience with vector databases such as PGVector, Weaviate, FAISS, or Pinecone
- Strong understanding of NLP, document extraction, and text processing
- Proficiency in AWS cloud services, including Bedrock, EC2, S3, Lambda, and monitoring tools
- Experience with FastAPI or similar frameworks for building AI/ML APIs
- Familiarity with embedding models, prompt engineering, and RAG systems
- Knowledge of asynchronous programming for high-throughput pipelines
- Experience with Docker, Git workflows, CI/CD pipelines, and testing best practices

Preferred:
- Background in HRTech or ATS integrations (e.g., Greenhouse, Workday, Bullhorn)
- Experience with knowledge graphs (e.g., Neo4j) for semantic relationships
- Experience with real-time AI systems (e.g., WebRTC, OpenAI Realtime API) and voice AI tools (e.g., ElevenLabs)
- Advanced Python development skills using design patterns and clean architecture
- Large-scale data processing experience (1-2M+ records) with cost-optimization techniques for LLMs
- Event-driven architecture experience using AWS SQS, SNS, or EventBridge
- Hands-on experience fine-tuning, evaluating, and deploying foundation models
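At the core of the RAG systems this posting mentions is vector retrieval: embed documents and queries, then rank documents by similarity to the query embedding. A minimal standard-library sketch of the ranking step (names and the toy 2-dimensional embeddings are illustrative; real systems use a vector database such as PGVector or FAISS and hundreds of embedding dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, corpus, top_k=2):
    """Rank (doc_id, embedding) pairs by similarity to the query; return top IDs."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]
```

Hybrid search combines this dense ranking with keyword scores (e.g., BM25) before a re-ranking pass; only the dense half is sketched here.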

Posted 1 month ago

Apply

2.0 - 9.0 years

0 Lacs

Karnataka

On-site

We are seeking Data Architects (Sr. Data Architect and Pr. Data Architect) to join our team. The role combines hands-on contribution, customer engagement, and technical team management.

As a Data Architect, your responsibilities will include:
- Designing, architecting, deploying, and maintaining solutions on the MS Azure platform using various Cloud & Big Data technologies
- Managing the full life-cycle of Data Lake / Big Data solutions, from requirement gathering and analysis through platform selection, architecture design, and deployment
- Implementing scalable solutions on the cloud
- Collaborating with a team of business domain experts, data scientists, and application developers to develop Big Data solutions
- Exploring and learning new technologies for creative problem solving
- Mentoring a team of Data Engineers

The ideal candidate should possess:
- Strong hands-on experience implementing Data Lakes with technologies such as Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hub & Streaming Analytics, Cosmos DB, and Purview
- Experience with big data technologies such as Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase, MongoDB, Neo4j, Elasticsearch, Impala, and Sqoop
- Proficiency in programming and debugging in Python and Scala/Java; experience building REST services is beneficial
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner
- A good understanding of CI/CD with Git and Jenkins / Azure DevOps
- Experience setting up cloud-computing infrastructure solutions
- Hands-on experience with, or exposure to, NoSQL databases and data modelling in Hive

Applicants should have a minimum of 9 years of technical experience, with at least 5 years on MS Azure and 2 years on Hadoop (CDH/HDP).

Posted 1 month ago

Apply

8.0 - 12.0 years

12 - 18 Lacs

Noida, Pune, Bengaluru

Work from Office

Responsibilities:
- Lead the technical discovery process, assess customer requirements, and design scalable solutions leveraging a comprehensive suite of Data & AI services, including BigQuery, Dataflow, Generative AI solutions, and advanced AI/ML services such as Vertex AI, Gemini, and Agent Builder
- Architect and demonstrate solutions leveraging generative AI, large language models (LLMs), AI agents, and agentic AI patterns to automate workflows, enhance decision-making, and create intelligent applications
- Develop and deliver compelling product demonstrations, proofs of concept (POCs), and technical workshops that showcase the value and capabilities of Google Cloud
- Collaborate with sales to build strong client relationships, articulate the business value of Google Cloud solutions, and drive adoption
- Lead and contribute technical content and architectural designs for RFI/RFP responses and technical proposals leveraging Google Cloud services
- Stay informed of industry trends, competitive offerings, and new Google Cloud product releases, particularly in the infrastructure and data/AI domains

Requirements:
- Extensive experience architecting and designing solutions on Google Cloud Platform, with a strong focus on Data & AI services such as BigQuery, Dataflow, Dataproc, Pub/Sub, Vertex AI (MLOps, custom models, pre-trained APIs), and Generative AI (e.g., Gemini)
- Strong understanding of data warehousing, data lakes, streaming analytics, and machine learning pipelines
- Strong understanding of cloud architecture patterns, DevOps practices, and modern software development methodologies
- Ability to work effectively in a cross-functional team environment with sales, product, and engineering teams
- 5+ years of experience in pre-sales or solutions architecture, focused on cloud Data & AI platforms
- Skilled in client engagements, technical presentations, and proposal development
- Excellent written and verbal communication skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences

Location: Noida, Pune, Bengaluru, Hyderabad, Chennai

Posted 2 months ago

Apply

7.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description:
We are seeking a highly motivated and experienced Senior Data Scientist with a proven track record of conducting in-depth data analysis, developing machine learning models, and delivering actionable insights to drive business decisions. The ideal candidate will have a deep understanding of statistics, machine learning algorithms, and data science best practices.

Key Responsibilities:
- Evaluate the performance of machine learning models and refine them to improve accuracy and generalizability
- Communicate data insights to stakeholders in a clear and concise manner using data visualization techniques and storytelling
- Collaborate with data engineers, software developers, and business stakeholders to integrate data science solutions into products and services
- Stay up to date with the latest trends and developments in data science, machine learning, and artificial intelligence

Technical Requirements:
- Master's degree in Computer Science, Statistics, Mathematics, or a related field
- 7 years of experience in data science and machine learning, with a strong focus on model development and deployment
- Expert-level knowledge of statistics, including probability theory, hypothesis testing, and statistical inference
- In-depth knowledge of machine learning algorithms, including linear regression, logistic regression, decision trees, random forests, XGBoost, and ensemble learning

Additional Responsibilities:
- Experience with Natural Language Processing (NLP) and Computer Vision (CV) techniques
- Knowledge of DevOps methodologies and practices for continuous integration and continuous delivery (CI/CD)
- Experience with data warehousing and data lake solutions such as BigQuery or Snowflake
- Familiarity with real-time data processing and streaming analytics
- Passion for learning and staying at the forefront of data science and machine learning advancements

Preferred Skills:
- Technology -> Analytics - Techniques -> Cluster Analysis
- Technology -> Analytics - Techniques -> Decision Trees
- Technology -> Analytics - Techniques -> Linear Regression
- Technology -> Machine Learning -> Python
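Linear regression, named in both the requirements and the preferred skills above, has a closed-form solution in the single-feature case: the slope is the covariance of x and y divided by the variance of x. A minimal standard-library sketch (function name is illustrative; production work would use a library such as scikit-learn or statsmodels):

```python
def fit_simple_ols(xs, ys):
    """Fit y = a + b*x by ordinary least squares, using the closed form
    b = cov(x, y) / var(x), a = mean(y) - b * mean(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b
```

For example, the points (0, 1), (1, 3), (2, 5), (3, 7) lie exactly on y = 1 + 2x, so the fit recovers intercept 1 and slope 2.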

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
