
83 Streaming Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 10.0 years

2 - 7 Lacs

Kochi

Remote

Looking for an experienced Linux Server Engineer with expertise in streaming technologies (WebRTC, RTMP, Ant Media), MongoDB, Redis, security, scaling, and infrastructure optimization. Short-term contract; immediate joiners preferred.

Posted 3 days ago

Apply

8.0 - 13.0 years

2 - 2 Lacs

Hyderabad

Work from Office

SUMMARY

Key Responsibilities:
- Work closely with clients to understand their business requirements and design data solutions that meet their needs.
- Develop and implement end-to-end data solutions covering data ingestion, storage, processing, and visualization.
- Design and implement data architectures that are scalable, secure, and compliant with industry standards.
- Work with data engineers, data analysts, and other stakeholders to ensure the successful delivery of data solutions.
- Participate in presales activities, including solution design, proposal creation, and client presentations.
- Act as a technical liaison between the client and internal teams, providing technical guidance and expertise throughout the project lifecycle.
- Stay up to date with industry trends and emerging technologies in data architecture and engineering.
- Develop and maintain client relationships to ensure ongoing satisfaction and identify opportunities for additional business.
- Understand the entire end-to-end AI lifecycle, from ingestion to inferencing, along with operations.
- Exposure to Gen AI and emerging technologies.
- Exposure to the Kubernetes platform, with hands-on experience deploying and containerizing applications.
- Good knowledge of data governance, data warehousing, and data modelling.

Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- 10+ years of experience as a Data Solution Architect, with a proven track record of designing and implementing end-to-end data solutions.
- Strong technical background in data architecture, data engineering, and data management.
- Extensive experience with any of the Hadoop flavours, preferably Data Fabric.
- Experience with presales activities such as solution design, proposal creation, and client presentations.
- Familiarity with cloud-based data platforms (e.g., AWS, Azure, Google Cloud) and related technologies such as data warehousing, data lakes, and data streaming.
- Experience with Kubernetes and the Gen AI tools and tech stack.
- Excellent communication and interpersonal skills, with the ability to explain technical concepts to both technical and non-technical audiences.
- Strong problem-solving skills, with the ability to analyze complex data systems and identify areas for improvement.
- Strong project management skills, with the ability to manage multiple projects simultaneously and prioritize tasks effectively.

Tools and Tech Stack:
- Data architecture and engineering:
  - Hadoop ecosystem (preferred: Cloudera Data Platform (CDP) or Data Fabric): HDFS, Hive, Spark, HBase, Oozie.
  - Data warehousing, cloud-based: Azure Synapse, Amazon Redshift, Google BigQuery, Snowflake, Azure Databricks; on-premises: Teradata, Vertica.
  - Data integration and ETL tools: Apache NiFi, Talend, Informatica, Azure Data Factory, AWS Glue.
- Cloud platforms: Azure (preferred for its data services and Synapse integration), AWS, or GCP.
  - Cloud-native components: data lakes (Azure Data Lake Storage, AWS S3, or Google Cloud Storage); data streaming (Apache Kafka, Azure Event Hubs, AWS Kinesis).
  - HPE platforms: Data Fabric, AI Essentials or Unified Analytics, HPE MLDM and HPE MLDE.
- AI and Gen AI technologies:
  - AI lifecycle management / MLOps: MLflow, Kubeflow, Azure ML, SageMaker, Ray; inference tools: TensorFlow Serving, KServe, Seldon.
  - Generative AI: frameworks such as Hugging Face Transformers and LangChain; tools such as the OpenAI API (e.g., GPT-4).
- Orchestration and deployment:
  - Kubernetes: Azure Kubernetes Service (AKS), Amazon EKS, Google Kubernetes Engine (GKE), or open-source Kubernetes; tools: Helm.
  - CI/CD for data pipelines and applications: Jenkins, GitHub Actions, GitLab CI, or Azure DevOps.

Posted 4 days ago

Apply

3.0 - 8.0 years

16 - 22 Lacs

Ahmedabad

Remote

Job Title: Senior Software Engineer I/II - Video Streaming Engineer
Department: Technology
Reports to: Software Engineering Manager
Experience: 3+ years
Location: Ahmedabad, Pune, India (remote option available)

Company Introduction
Genea was built on a foundation of listening to and serving our commercial real estate customers' needs, starting with our flagship Overtime HVAC product. Over the years, we've earned the trust and loyalty of 21 of the top 25 largest commercial real estate companies in the US and continue to prove our value to them every day. Our clients depend on us to develop value-added technology solutions that solve other pain points in their operations, and we have grown our product portfolio to serve the complex and expanding needs of property teams. Genea's family of solutions is designed to improve commercial real estate operations through cutting-edge technology: revolutionize access control, streamline overtime HVAC, and economize submeter billing. Our cloud-based, automated solutions are built to cut costs, reduce admin time, and maximize the tenant experience. Experiencing enormous growth with aggressive expansion plans, Genea is expanding a software development and operations center in India and is looking for dynamic engineering leaders who can become part of this already successful growth story and partner in creating a world-class team and products.

Overview
Senior Software Engineers I/II focus on applying the principles of engineering to software development. The role includes analyzing and modifying existing software as well as creating new software, and designing, constructing, and testing end-user applications that meet user needs, all through software programming languages. Genea is an engineering company at heart. We hire people with a broad set of technical skills who are ready to take on some of technology's greatest challenges and make an impact on Genea's end users.
A software engineer's approach should be customer-centric and results-driven, combining computer science principles with innovative thinking to solve day-to-day software development tasks. Transparency, teamwork, and dedication are essential qualities of a software engineer.

What You'll Do
- Write and test product or system development code.
- Design and implement the video streaming and video processing services necessary to support new and existing features.
- Design metrics that capture the streaming experience and system performance.
- Participate in, or lead, design reviews with peers and stakeholders to decide among available technologies.
- Review code developed by other developers and provide feedback to ensure best practices (e.g., style guidelines, checking code in, accuracy, testability, and efficiency).
- Maintain a pulse on emerging technologies and discover hidden opportunities in our environment.
- Ensure redundancy and resilience of Genea production infrastructure.

What We Look For
- A scrappy, entrepreneurial attitude that gets high-quality projects done quickly.
- Expertise in at least one general-purpose programming language: Node.js, Python, Rust, or C/C++.
- Hands-on REST API development experience.
- Knowledge of multiple streaming protocols (RTMP, RTSP, RTP, HLS, WebRTC, DASH, etc.) and codecs (AAC, Opus, H.264, H.265, VP8, VP9, AV1, etc.).
- High proficiency in database design, with both relational and NoSQL databases.
- Comfort working with AWS, Linux, Docker, continuous deployment workflows, and a multi-language tech stack.
- Strong written and verbal communication skills.
- Self-directed and analytical; works well in a team environment.
- Passionate about the Genea product.
- Experience with multimedia libraries and frameworks (FFmpeg, GStreamer, libvpx, x264, x265, etc.) is a plus.
- Experience building video pipelines with AI inference for computer vision is a plus.
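As a small, hedged illustration of the protocol knowledge listed above: HLS, one of the streaming formats mentioned, describes a stream as a plain-text playlist of media segments. Below is a minimal sketch of a media-playlist parser; Python is used for brevity (the role also lists Node.js, Rust, and C/C++), and the playlist content is invented for illustration.

```python
# Minimal HLS (.m3u8) media-playlist parser: extracts segment URIs and their
# durations from #EXTINF tags. A conceptual sketch only -- real players also
# handle encryption keys, byte ranges, variant playlists, and more.

def parse_hls_playlist(text: str) -> list[tuple[float, str]]:
    """Return (duration_seconds, segment_uri) pairs from a media playlist."""
    segments = []
    pending_duration = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#EXTINF:"):
            # Tag format: "#EXTINF:<duration>,[<title>]"
            pending_duration = float(line[len("#EXTINF:"):].split(",")[0])
        elif line and not line.startswith("#") and pending_duration is not None:
            segments.append((pending_duration, line))
            pending_duration = None
    return segments

playlist = """#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXTINF:6.0,
seg0.ts
#EXTINF:5.5,
seg1.ts
#EXT-X-ENDLIST"""

segments = parse_hls_playlist(playlist)
total = sum(duration for duration, _ in segments)  # total playlist length
```

Production players such as ExoPlayer or hls.js implement far more of the spec than this sketch; it only shows the core playlist shape a streaming engineer works with daily.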
Competencies:
- Diversity - Shows respect and sensitivity for cultural differences; educates others on the value of diversity; promotes a harassment-free environment; builds a diverse workforce.
- Ethics - Treats people with respect; keeps commitments; inspires the trust of others; works with integrity and ethics; upholds organizational values.
- Adaptability - Adapts to changes in the work environment; manages competing demands; changes approach or method to best fit the situation; able to deal with frequent change, delays, or unexpected events.
- Attendance/Punctuality - Is consistently at work and on time; ensures work responsibilities are covered when absent; arrives at meetings and appointments on time.
- Design - Generates creative solutions; translates concepts and information into images; uses feedback to modify designs; applies design principles; demonstrates attention to detail.
- Oral Communication - Speaks clearly and persuasively in positive or negative situations; listens and gets clarification; responds well to questions; demonstrates group presentation skills; participates in meetings.
- Problem Solving - Identifies and resolves problems in a timely manner; gathers and analyzes information skillfully; develops alternative solutions; works well in group problem-solving situations; uses reason even when dealing with emotional topics.
- Professionalism - Approaches others in a tactful manner; reacts well under pressure; treats others with respect and consideration regardless of their status or position; accepts responsibility for own actions; follows through on commitments.
- Quality - Demonstrates accuracy and thoroughness; looks for ways to improve and promote quality; applies feedback to improve performance; monitors own work to ensure quality.
- Quantity - Meets productivity standards; completes work in a timely manner; strives to increase productivity; works quickly.
- Safety and Security - Observes safety and security procedures; determines appropriate action beyond guidelines; reports potentially unsafe conditions; uses equipment and materials properly.
- Teamwork - Balances team and individual responsibilities; exhibits objectivity and openness to others' views; gives and welcomes feedback; contributes to building a positive team spirit; puts the success of the team above own interests; able to build morale and group commitment to goals and objectives; supports everyone's efforts to succeed.
- Technical Skills - Assesses own strengths and weaknesses; pursues training and development opportunities; strives to continuously build knowledge and skills; shares expertise with others.
- Written Communication - Writes clearly and informatively; edits work for spelling and grammar; varies writing style to meet needs; presents numerical data effectively; able to read and interpret written information.

Perks and benefits we offer:
- Work your way: enjoy a flexible working environment that suits your lifestyle.
- Time off: 24 days of PTO and 10 holidays to unwind and pursue your passions.
- Comprehensive leave options: maternity, paternity, adoption, wedding, and bereavement leave to support you during important life events.
- Health and safety first: family health insurance and personal accident coverage beyond your CTC.
- Top workplace honors: celebrated as a Top Workplace from 2021 to 2023.
- Balanced workweek: embrace a balanced life with our 5-day work schedule.

Posted 5 days ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

As a Backend Developer, you will be expected to have the following qualifications and responsibilities:

**Role Overview:**
You should have strong proficiency in at least one modern backend language such as Java, Go, or Rust, and a solid understanding of data structures, algorithms, concurrency, and system design. You should have worked with reactive programming and have experience with microservices architecture, REST/gRPC APIs, and event-driven systems. Hands-on experience with SQL and NoSQL databases (e.g., Postgres, MongoDB, Cassandra, Scylla) is required. Familiarity with streaming/queueing systems like Kafka, cloud platforms such as AWS or GCP, and container orchestration tools like Docker and Kubernetes is highly desirable. You should also have a good grasp of observability, including metrics, logging, tracing, and monitoring tools such as Prometheus, Grafana, and the ELK stack. Strong problem-solving and debugging skills in complex distributed systems are a must, along with the ability to work effectively in a fast-paced, agile environment.

**Key Responsibilities:**
- Proficiency in at least one modern backend language (e.g., Java, Go, Rust)
- Understanding of data structures, algorithms, concurrency, and system design
- Experience with reactive programming
- Experience with microservices architecture, REST/gRPC APIs, and event-driven systems
- Hands-on experience with databases (SQL and NoSQL: Postgres, MongoDB, Cassandra, Scylla, etc.)
- Familiarity with streaming/queueing systems like Kafka
- Knowledge of cloud platforms (AWS/GCP) and container orchestration (Docker, Kubernetes)
- Understanding of observability (metrics, logging, tracing) and monitoring tools (Prometheus, Grafana, ELK)
- Strong problem-solving and debugging skills in complex distributed systems
- Ability to work in a fast-paced, agile environment

If you meet these qualifications and are ready to take on these responsibilities, we encourage you to apply and become part of our team.

Posted 5 days ago

Apply

8.0 - 12.0 years

20 - 35 Lacs

Pune

Work from Office

This role is accountable for running day-to-day operations of the Data Platform in Azure / AWS Databricks NCS. The Data Engineer is accountable for ongoing development and enhancement support, maintaining data availability and data quality, and for performance enhancement and stability of the system. Core accountabilities: designing and implementing data ingestion pipelines from multiple sources using Azure Databricks; ensuring data pipelines run smoothly and efficiently; adherence to security, regulatory, and audit control guidelines; and driving optimization, continuous improvement, and efficiency.

Roles and Responsibilities
1. Design and implement data ingestion pipelines from multiple sources using Azure Databricks.
2. Ensure data pipelines run smoothly and efficiently.
3. Develop scalable and reusable frameworks for ingesting data sets.
4. Integrate the end-to-end data pipeline, taking data from source systems to target data repositories while ensuring the quality and consistency of data is maintained at all times.
5. Work with event-based / streaming technologies to ingest and process data.
6. Work with other members of the project team to support delivery of additional project components (API interfaces, search).
7. Evaluate the performance and applicability of multiple tools against customer requirements.
8. Be technically competent in cloud and Databricks, providing technical advice to the team and participating in issue resolution.
9. Provide on-call support and after-hours/weekend support as needed.
10. Fulfill service requests related to the Data Analytics platform.
11. Lead and drive optimization and continuous improvement initiatives.
12. Play a gatekeeping role and conduct technical review of changes as part of release management.
13. Understand various data security standards and adhere to the required data security controls in the platform.
14. Lead the design, development, and deployment of advanced data pipelines and analytical workflows on the Databricks Lakehouse platform.
15. Collaborate with data scientists, engineers, and business stakeholders to build and scale end-to-end data solutions.
16. Own architectural decisions and ensure adherence to data governance, security, and compliance requirements.
17. Mentor a team of data engineers, providing technical guidance and career development.
18. Implement CI/CD practices for data engineering pipelines using tools like Azure DevOps, GitHub Actions, or Jenkins.
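The ingestion responsibilities above follow an extract-transform-load shape. A toy plain-Python sketch of that shape, under the assumption that each stage is a plain function so new sources only need to supply an extractor (in the actual role these stages would be Databricks/Spark jobs, and the record fields here are invented):

```python
# Toy extract -> transform -> load pipeline illustrating the
# "reusable ingestion framework" idea: stages are composable functions.

def extract(source_rows):
    """Extract stage: yield raw records from a source (here, a list)."""
    yield from source_rows

def transform(records):
    """Transform stage: drop malformed rows, normalize field types."""
    for r in records:
        if "id" not in r or r.get("value") is None:
            continue  # basic data-quality gate
        yield {"id": int(r["id"]), "value": float(r["value"])}

def load(records, target):
    """Load stage: append validated records to the target store."""
    for r in records:
        target.append(r)
    return target

raw = [{"id": "1", "value": "3.5"},   # valid
       {"value": "9"},                # missing id -> dropped
       {"id": "2", "value": None},    # null value -> dropped
       {"id": "3", "value": "4"}]     # valid
warehouse = load(transform(extract(raw)), target=[])
```

Because each stage is generator-based, records stream through one at a time rather than materializing intermediate lists, which is the same laziness Spark pipelines rely on at scale.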

Posted 5 days ago

Apply

15.0 - 19.0 years

0 Lacs

Hyderabad, Telangana

On-site

As an Engineering Manager (Hands-on) in Treasury & Capital Markets at our company, you will play a crucial role in leading a team of engineers to architect, design, and deliver cutting-edge platforms for Treasury & Markets. With a focus on high availability, resilience, and scalability, you will be responsible for translating business problems into technology solutions, driving modernization initiatives, and leveraging AI tools for enhanced productivity.

Your key responsibilities will include leading and mentoring a team of engineers, staying hands-on in architecture and coding, and driving engineering excellence in areas such as clean code, CI/CD pipelines, and cloud-native best practices. You will also be involved in solutioning by architecting microservices-based platforms and leading modernization journeys towards cloud adoption and containerization. Additionally, you will apply AI tools for rapid prototyping and code generation, and drive the use of ML/AI in areas such as cashflow forecasting and risk monitoring. Your qualifications should include a strong background in software engineering, hands-on experience with Java, Python, and microservices, and domain knowledge in Treasury, ALM, Risk, and trading workflows.

Overall, we are looking for a seasoned professional with over 15 years of experience in software engineering and proven expertise in leading high-performing technology teams. If you have a track record of delivering mission-critical platforms in the BFSI sector and excel in stakeholder management, we would like to hear from you. Join us in revolutionizing the Treasury & Capital Markets space with your technical expertise and leadership skills.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Technical Marketing Product Manager at AMD, you will play a crucial role in driving the long-term strategy, vision, and roadmap to deliver high-quality, industry-leading technologies to market. Your primary focus will be on DDR and AXI IP, as well as related high-speed protocols. You will own the product life cycle of AMD-AECG's DDR and AXI IP portfolio from definition to production and EOL. Collaborating closely with various teams within AMD, including HW & SW Engineering and Architects, you will work towards driving technology innovation in IPs, SW, and platforms.

Your key responsibilities will include delivering technical content to customers through webinars, white papers, and training sessions, as well as understanding and addressing customer challenges to ensure their success. You will also partner across different organizations to advance technology innovation. Preferred experience for this role includes familiarity with FPGA design, interconnect and bus interfaces, ARM processors, DDR4/5, LPDDR4/5, AXI-Lite, and AXI4 Memory-Mapped and Streaming interfaces, among others. Additionally, a demonstrated ability to engage in technical discussions, manage tradeoffs, and evaluate new ideas will be beneficial.

To qualify for this position, you should hold a Bachelor's or Master's degree in Electrical/Computer Engineering with at least 4 years of experience in the field. By joining AMD, you will become part of a culture that values innovation and excellence while fostering collaboration and inclusivity. Together, we strive to push the boundaries of technology to address the world's most critical challenges and create a positive impact on our industry and communities.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

You will be part of the Software/QA engineering teams dedicated to working on our client's TV-OS product. Your responsibilities will include partner SoC board bring-up, third-party API integration, feature development, code maintenance, designing scalability solutions, and supporting QA teams. As a technical leader, you will actively engage in design, development, code reviews, defining test cases, and leading a small development team. Collaboration with the QA team is essential to ensure the successful delivery of high-quality products. This role offers an opportunity to be involved in end-to-end product development and release cycles.

The ideal candidate should possess a strong sense of curiosity, a global mindset, and the ability to execute locally. Problem-solving ability, self-starting initiative, and innovative thinking are key qualities for success in this role. To excel in this position, you should have a minimum of 6 years of experience developing software for Linux-based embedded systems, a solid understanding of multimedia embedded domains, proficiency in embedded C and C++ programming, excellent debugging and problem-solving skills, familiarity with Linux kernel concepts and tools, experience with multi-threading/multi-core primitives, and prior exposure to leading and technically mentoring small development teams. A Bachelor's or Master's degree in CS Engineering or Electronics/Electrical Engineering is required.

Nice-to-have skills include modern C++ knowledge (C++11 to C++20) and contributions to open-source development. Standing out from the crowd would involve knowledge of broadcast technology and digital TV standards (such as DVB, HbbTV, CI Plus), experience developing software for set-top boxes or TVs, and familiarity with graphics, audio, video, streaming, and media player technology.

Personal attributes that would be advantageous for this role include being an excellent team player with technical leadership qualities, the ability to work effectively in a fast-paced engineering environment, and strong verbal and written communication skills. At GlobalLogic, we prioritize a culture of caring and continuous learning and development, offering meaningful work opportunities and promoting balance and flexibility in work-life integration. We are a high-trust organization that values integrity and ethical practices. By joining GlobalLogic, you will have the chance to work on impactful projects, collaborate with forward-thinking clients, and be part of a trusted digital engineering partner shaping the digital revolution.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

You will work as part of a cross-functional agile delivery team, bringing an innovative approach to software development. Your focus will be on using the latest technologies and practices to deliver business value. You should view engineering as a team activity, promoting open code, open discussion, and a supportive, collaborative environment. You will contribute to all stages of software delivery, from initial analysis to production support. As a Full Stack, Associate based in Pune, India, you will be responsible for developing, enhancing, modifying, and maintaining applications in the Enterprise Risk Technology environment. This role involves designing, coding, testing, debugging, and documenting programs, as well as supporting activities for the corporate systems architecture. Working closely with business partners to define requirements for system applications, you will utilize your in-depth knowledge of development tools and languages. This position is recognized as a content expert by peers and requires 5-7 years of applicable experience. Your key responsibilities will include developing software in Java, object-oriented database, and grid using Kubernetes & OpenShift platform, building REST web services, designing interfaces between UI and REST service, and building data-grid centric UI. You will participate fully in the agile software development process, using BDD techniques and collaborating closely with users, analysts, developers, and testers to ensure the right product is built. Writing clean code, refactoring constantly, and working on a range of technologies and components are essential aspects of this role. Having a deep knowledge of modern programming languages, understanding object-oriented and functional programming, and practical experience with test-driven development in a continuous integration environment are crucial skills for this role. 
Experience with web technologies, frameworks, and tools like HTML, CSS, JavaScript, ReactJS, and Oracle PL/SQL programming is required. Familiarity with SQL, relational databases, agile methodologies, and functional analysis is highly desirable. The ideal candidate will also have experience with Behavior-Driven Development, a range of technologies for data storage and manipulation, and exposure to architecture and design approaches that support rapid, incremental, and iterative delivery. Training and development opportunities, coaching, and a culture of continuous learning will be provided to support your career progression. Join our team at Deutsche Bank Group, where we strive for a culture of empowerment, responsibility, commercial thinking, initiative, and collaboration. We welcome applications from all individuals and promote a positive, fair, and inclusive work environment. Visit our company website for further information: [Deutsche Bank Company Website](https://www.db.com/company/company.htm).

Posted 1 week ago

Apply

6.0 - 9.0 years

15 - 18 Lacs

Bengaluru, India

Work from Office

Job Requirements

Must Have:
- Over 12 years of extensive hands-on experience in Python and distributed computing concepts.
- Over 5 years of experience in Kafka or similar streaming technologies.
- Over 5 years of experience in Spark (PySpark preferred).
- Experience working with cloud-native technologies such as Kubernetes.
- Good understanding of DevOps concepts, with hands-on experience designing CI/CD solutions.
- Extensive experience working directly with end users to understand functional requirements and to design, build, and support complex data engineering solutions.
- Experience working with RDBMS and NoSQL database technologies.
- Experience working with a global engineering team.
- Excellent analytical and communication skills.

Nice to Have:
- Experience working with SQL.
- Willingness to research and learn new technologies.
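The distributed-computing concepts this posting asks about reduce, at their smallest, to split-apply-combine: partition the data, apply work to each partition independently, then merge the results. A single-machine sketch with Python's concurrent.futures, offered only as an analogy (Spark generalizes the same shape across a cluster; the workload here is invented):

```python
# Parallel map over disjoint partitions with a thread pool -- the same
# split-apply-combine shape Spark distributes across executors.
from concurrent.futures import ThreadPoolExecutor

def partition_sum(partition):
    """Work applied independently to one partition (no shared state)."""
    return sum(x * x for x in partition)

data = list(range(100))
partitions = [data[i::4] for i in range(4)]  # split into 4 disjoint slices

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(partition_sum, partitions))  # apply

total = sum(partials)  # combine
```

The key property, here as in Spark, is that `partition_sum` touches no shared state, so the partitions can be processed in any order, on any worker.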

Posted 1 week ago

Apply

5.0 - 8.0 years

10 - 15 Lacs

Bengaluru

Work from Office

We are seeking Data Engineers (GCP) who can develop and maintain a high-performance streaming data platform for a leading healthcare CRM product. You will work across technologies such as Python, PySpark, SQL, CI/CD, Kafka, GCP (BigQuery, Google Cloud Compute), Looker, and data modelling.

Required Skill Set:
- Programming & Data Engineering: Python, PySpark, SQL, CI/CD
- Streaming & Orchestration: Kafka, GCP Composer/Airflow
- Cloud Tools: Google Cloud Platform (Dataproc, BigQuery, Looker)
- Data Architecture: Data Warehousing, Data Modeling

Location: Manyata Tech Park, Bengaluru

Roles and Responsibilities:
- Design, develop, and maintain scalable data pipelines to process and transform large volumes of structured and unstructured data.
- Build and maintain ETL/ELT workflows for data ingestion from various sources (APIs, databases, files, cloud).
- Ensure data quality, integrity, and governance across the pipeline.
- Collaborate with data scientists, analysts, and business teams to understand data requirements.
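"Ensure data quality, integrity, and governance" can be made concrete as rule-based record validation. A hedged plain-Python sketch (in a role like this, such checks would typically run inside PySpark or a dedicated data-quality tool; the rules and records below are invented):

```python
# Rule-based record validation: each rule is a (name, predicate) pair, and a
# record passes only if every rule holds; failed rule names are kept per record
# so bad rows can be quarantined with a reason instead of silently dropped.

RULES = [
    ("id_present", lambda r: r.get("id") is not None),
    ("amount_non_negative", lambda r: isinstance(r.get("amount"), (int, float))
                                      and r["amount"] >= 0),
]

def validate(records):
    """Split records into (passing_records, [(bad_record, failed_rules)])."""
    passing, failing = [], []
    for r in records:
        failures = [name for name, pred in RULES if not pred(r)]
        if failures:
            failing.append((r, failures))
        else:
            passing.append(r)
    return passing, failing

records = [{"id": 1, "amount": 20.0},
           {"id": None, "amount": 5},
           {"id": 2, "amount": -3}]
good, bad = validate(records)
```

Keeping the failure reasons alongside quarantined rows is the design choice that makes pipeline data quality auditable, not just enforced.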

Posted 1 week ago

Apply

8.0 - 12.0 years

12 - 16 Lacs

Bengaluru

Work from Office

- Design and maintain high-performance SQL queries to support analytics and real-time decision-making.
- Build and scale distributed data pipelines to support a variety of e-commerce functions, including order tracking, recommendations, and dynamic pricing.
- Implement real-time data processing using RisingWave, Azure, etc., delivering insights on user behavior, product trends, and operational efficiency.
- Lead and mentor a team of data engineers, setting technical direction and ensuring high-quality delivery.
- Collaborate with product, engineering, and analytics teams to translate business needs into scalable data solutions.
- Own and drive data-driven initiatives across the organization with measurable business impact.
- Should have good exposure to Azure data technologies.
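The real-time processing described above typically means windowed aggregation over an event stream. A toy tumbling-window sketch in plain Python (streaming engines like RisingWave express the same idea as continuously maintained SQL; the timestamps and events here are invented):

```python
# Tumbling-window count: each event carries an epoch-second timestamp and
# falls into exactly one fixed-size window, keyed by the window's start time.
from collections import defaultdict

def tumbling_window_counts(events, window_size):
    """Count events per non-overlapping window of `window_size` seconds."""
    counts = defaultdict(int)
    for ts, _payload in events:
        window_start = (ts // window_size) * window_size  # floor to window
        counts[window_start] += 1
    return dict(counts)

events = [(100, "view"), (104, "click"), (112, "view"), (125, "order")]
counts = tumbling_window_counts(events, window_size=10)
```

A real streaming system additionally handles late and out-of-order events (watermarks) and emits window results incrementally, which this batch-style sketch deliberately omits.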

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

You should have at least 7 years of experience in Android development, with a strong understanding of Clean Architecture, MVVM/MVI, and modular design principles. Your knowledge of Android APIs and platform capabilities should be thorough, along with expertise in RESTful API integrations and robust caching strategies. Proficiency in Kotlin, Coroutines, and Flow is required, along with a strong understanding of multi-threading, memory optimization, and performance tuning in Android applications. Familiarity with Agile methodologies and modern development practices is essential, as are excellent communication skills for collaborating with global teams and stakeholders. Experience with CI/CD pipelines for Android applications is also expected.

You must have hands-on experience with Kotlin Multiplatform (KMP) and ideally have worked in the media, streaming, or content industry. Familiarity with video streaming protocols such as HLS and DASH, DRM solutions, and media playback frameworks like MediaPlayer and ExoPlayer is preferred. Experience with Firebase services and analytics implementation, as well as knowledge of app performance optimization techniques, will be beneficial.

In return, we offer competitive salaries and comprehensive health benefits, along with flexible work hours and remote work options. You will have access to professional development and training opportunities in a supportive and inclusive work environment.

Posted 2 weeks ago

Apply

0.0 - 3.0 years

0 Lacs

Hyderabad, Telangana

On-site

Artmac Soft is a technology consulting and service-oriented IT company dedicated to providing innovative technology solutions and services to customers. As an AI Engineer with Python at Artmac Soft, you will be responsible for leveraging your 1 year of experience in AI engineering, with a focus on Python programming. Your role will involve utilizing your knowledge of Cursor AI, Render, and Cloudflare, along with proficiency in Python libraries and frameworks like TensorFlow, PyTorch, scikit-learn, NumPy, and pandas. You will need a strong understanding of machine learning algorithms and deep learning architectures to develop AI/ML models that address business challenges.

In this role, you will collaborate with cross-functional teams, including data scientists, software engineers, and product managers, to integrate AI solutions into products. Your responsibilities will also include building and maintaining scalable data pipelines for machine learning workflows, optimizing AI models for performance and accuracy, and developing APIs and services to deploy AI models in production environments.

To qualify for this position, you should hold a Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. You must have experience with cloud platforms like AWS, GCP, or Azure, as well as containerization tools such as Docker and Kubernetes. Knowledge of CI/CD pipelines, MLOps practices, NLP, computer vision, distributed computing, big data technologies, version control systems like Git, and real-time data processing and streaming will be beneficial. You will also be expected to stay updated on the latest advancements in AI, machine learning, and deep learning, while documenting AI solutions, workflows, and best practices. Monitoring, evaluating, and troubleshooting deployed models to ensure reliability and performance will be part of your routine tasks, along with data preprocessing, feature engineering, and exploratory data analysis.

If you are passionate about AI, Python programming, and leveraging cutting-edge technologies to drive business outcomes, we encourage you to apply for this full-time position based in Hyderabad, Telangana. Join us at Artmac Soft and be part of a team dedicated to delivering innovative technology solutions to our customers.
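Of the routine tasks above, data preprocessing is the most mechanical. As one hedged example, z-score standardization of a numeric feature in plain Python (real pipelines would use NumPy/pandas or scikit-learn's StandardScaler; the values are invented):

```python
# Z-score standardization: rescale a feature to mean 0 and unit variance,
# a common preprocessing step before training most ML models.
import math

def standardize(values):
    """Return values rescaled to zero mean and unit (population) variance."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    std = math.sqrt(variance)
    return [(v - mean) / std for v in values]

scaled = standardize([2.0, 4.0, 6.0, 8.0])
```

In production the mean and std must be computed on the training split only and reused on validation/test data, otherwise information leaks across the split.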

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

bengaluru

Work from Office

Description / Requirements:
1. Data engineer with 6+ years of hands-on experience working on Big Data platforms
2. Experience building and optimizing big data pipelines and data sets, from data ingestion to processing to data visualization
3. Good experience in writing and optimizing Spark jobs, Spark SQL, etc.; should have worked on both batch and streaming data processing
4. Good experience in at least one programming language (Scala/Python), Python preferred
5. Experience in writing and optimizing complex Hive and SQL queries to process huge data volumes; good with UDFs, tables, joins, views, etc.
6. Experience in using Kafka or any other message broker
7. Configuring, monitoring, and scheduling jobs using Oozie and/or Airflow
8. Processing streaming data directly from Kafka using Spark jobs; experience in Spark Streaming is a must
9. Should be able to handle different file formats (ORC, Avro, and Parquet) as well as unstructured data
10. Should have experience with at least one NoSQL database or object store such as Amazon S3
11. Should have worked with a data warehouse tool such as AWS Redshift, Snowflake, or BigQuery
12. Work experience on at least one cloud: AWS, GCP, or Azure

Good to have skills:
1. Experience with AWS cloud services such as EMR, S3, Redshift, and EKS/ECS
2. Experience with GCP cloud services such as Dataproc and Google Cloud Storage
3. Experience working with huge big data clusters with millions of records
4. Experience working with the ELK stack, especially Elasticsearch
5. Experience with Iceberg, Hadoop MapReduce, Apache Flink, Kubernetes, etc.

What We Offer: Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them. Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment — or even abroad in one of our global centers or client facilities! Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays. Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), a stress management program, professional certifications, and technical and soft-skill trainings. Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses. Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidised rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and a GL Club where you can share a coffee or tea with your colleagues, and we offer discounts for popular stores and restaurants!
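Requirement 8 above — consuming a Kafka stream with Spark — ultimately boils down to windowed aggregation. A pure-Python sketch of the tumbling-window count that Spark Structured Streaming performs (the event data and 10-second window are made-up; a real pipeline would use `spark.readStream.format("kafka")` rather than hand-rolled code):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=10):
    """Group (timestamp, key) events into fixed, non-overlapping time windows
    and count occurrences per key — what Spark's groupBy(window(...), key).count() does."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

# Events as (epoch-seconds, action) pairs, e.g. parsed from Kafka messages
events = [(3, "play"), (7, "pause"), (12, "play"), (15, "play")]
print(tumbling_window_counts(events))
```

The same grouping logic underlies sliding and session windows; only the window-assignment step changes.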

Posted 2 weeks ago

Apply

15.0 - 19.0 years

0 Lacs

pune, maharashtra

On-site

Join the Team That's Redefining Wireless Technology

At Tarana, you will have the opportunity to contribute to building a cutting-edge cloud product: a management system for wireless networks that scales to millions of devices, built on modern cloud-native architecture and open-source technologies. Your responsibilities will include designing and implementing distributed software in a microservices architecture, covering requirements gathering, high-level design, implementation, integrations, operations, troubleshooting, performance tuning, and scaling. As a key member of the team, you will provide technical and engineering leadership to an R&D team that manages multiple microservices end-to-end. You can expect to work on proof of concepts, customer pilots, and production releases in an agile engineering environment. The role will present daily challenges that will allow you to enhance and expand your skills. Meeting high standards of quality and performance will be a core focus, and the necessary mentoring will be provided to support your success. The position is based in Pune and requires in-person presence in the office for collaboration with team members.

Qualifications:
- Minimum of 15 years of software development experience, with at least 5 years in large-scale distributed software
- Experience in product architecture and design, including providing technical leadership to engineering teams
- Familiarity with building SaaS product offerings or IoT applications
- Bonus points for experience in not only developing but also operating and managing such systems

Required Skills & Experience:
- Bachelor's degree (or higher) in Computer Science or a related field from a reputable university; a Master's or Ph.D. is preferred
- Proficiency in software design and development in Java and its associated ecosystem (e.g., Spring Boot, Hibernate)
- Expertise in microservices and RESTful APIs, covering design, implementation, and consumption
- Strong understanding of distributed systems, including concepts like clustering, asynchronous messaging, scalability and performance, data consistency, and high availability
- Experience with distributed messaging systems like Kafka/Confluent, Kinesis, or Google Pub/Sub
- Proficiency in databases (relational, NoSQL, search engines), caching, and distributed persistence technologies; knowledge of Elasticsearch or any time-series database is a plus
- Familiarity with cloud-native platforms like Kubernetes and service-mesh technologies such as Istio
- Knowledge of network protocols (TCP/IP, HTTP), standard network architectures, and RPC mechanisms (e.g., gRPC)
- Understanding of secure coding practices, network security, and application security

Join Tarana and be part of shaping the future of wireless connectivity.

About Us: Tarana's mission is to accelerate the deployment of fast, affordable internet access globally. With over a decade of R&D and significant investment, the Tarana team has developed a unique next-generation fixed wireless access (ngFWA) technology, embodied in its initial commercial platform, Gigabit 1 (G1). G1 represents a significant advancement in broadband economics for both mainstream and underserved markets, utilizing licensed or unlicensed spectrum. Since its production launch in mid-2021, G1 has been adopted by over 250 service providers in 19 countries and 41 US states. Headquartered in Milpitas, California, Tarana also conducts research and development in Pune, India. Visit our website to learn more about G1.

Posted 3 weeks ago

Apply

5.0 - 7.0 years

13 - 18 Lacs

hyderabad, bengaluru

Work from Office

• Help build the backend services that power the playback experience for millions of subscribers around the world
• Collaborate with other software engineers and product teams to ensure successful implementation of software solutions to meet our primary goal

Required Candidate profile
• You have 5+ years of experience crafting software solutions, with a track record of developing solutions used globally by millions of users. You have expertise in video streaming and DRM technologies.

Posted 3 weeks ago

Apply

7.0 - 9.0 years

15 - 20 Lacs

bengaluru

Hybrid

Hiring Scala Data Engineer (7+ yrs, Bangalore Hybrid) – Must have Scala, Spark, SQL, DS & Algo, Cloud (Azure). Preferred: Kafka, Hadoop, CI/CD, Airflow, Iceberg, NoSQL, Data Governance. Required Candidate profile Scala Data Engineer (7+ yrs) with expertise in Scala, Spark, SQL, DS & Algo, Azure. Skilled in Kafka, Hadoop, CI/CD, Airflow, NoSQL, Data Governance, building scalable & reliable pipelines.

Posted 3 weeks ago

Apply

7.0 - 12.0 years

12 - 22 Lacs

noida

Hybrid

Job summary: The ideal candidate is a highly skilled and experienced QA Engineer with a strong background in testing Over-the-Top (OTT) applications across multiple platforms, including Roku, Amazon Fire TV Stick, Android, iOS, tvOS, macOS, and Windows, and a proven track record of defining test strategies, test plans, and work estimation within agile development environments. You will be responsible for executing and automating comprehensive end-to-end testing methodologies, ensuring high-quality deliverables for OTT development projects.

Responsibilities:
- Help define and execute comprehensive quality test plans for OTT video streaming applications and services
- Build and implement technical solutions to the QE challenges faced during SDLC phases
- Challenge the status quo and actively think about bringing more efficiency to processes and the delivery approach
- Bring fresh and innovative approaches to software test management and delivery, up-skill people, effectively engage with stakeholders, and drive continuous improvement
- Own end-to-end test delivery, including but not limited to test strategy, planning, estimation, execution, and reporting for one or more projects
- Implement automated testing solutions and develop lightweight, scalable, and highly modularized automation frameworks using tools like Selenium/WebDriver, Roku WebDriver, and Appium
- Act as a quality gatekeeper and advise Dev counterparts on implementing best processes and tools
- Drive consistency and best practices by leveraging collective intelligence through cross-functional knowledge sharing across teams

Requirements:
- B.Tech/MCA with 4+ years of overall experience in testing
- Experience with defining test strategy, test plans, and work estimation for OTT development projects; well-versed in end-to-end testing methodologies
- Insightful experience with all phases of the software development life cycle, particularly for OTT development with an agile development model and risk-based testing
- Testing experience on Roku, Amazon Fire TV Stick, Android, iOS, tvOS, macOS, and Windows
- Strong background in implementing automated testing solutions and developing lightweight, scalable, and highly modularized automation frameworks using tools like Selenium/WebDriver, Roku WebDriver, and Appium
- Experience with automated end-to-end testing using open-source test tools like Selenium/WebDriver, Roku WebDriver, and Appium
- Experience using network tools like Charles Proxy, Wireshark, or an equivalent tool
- Experience in functional, integration, smoke, sanity, regression, performance, compatibility, API, and ad-hoc testing
- Excellent problem-solving, out-of-the-box thinking, and analytical skills
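The "highly modularized automation frameworks" this listing asks for are typically built around the Page Object pattern. A minimal sketch in plain Python, where the hypothetical `FakeDriver` stands in for a real Selenium/Appium session (with Selenium you would pass a `webdriver.Remote` instance instead):

```python
class FakeDriver:
    """Stand-in for a Selenium/Appium driver; a real one would talk to a device."""
    def __init__(self):
        self.fields = {}

    def type_into(self, locator, text):
        self.fields[locator] = text

    def read(self, locator):
        return self.fields.get(locator, "")

class LoginPage:
    """Page object: tests call intent-level methods and never touch raw locators,
    so a UI change means editing one class, not hundreds of test cases."""
    USERNAME, PASSWORD = "id=username", "id=password"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type_into(self.USERNAME, user)
        self.driver.type_into(self.PASSWORD, password)
        return self  # a real framework would return the next page object

driver = FakeDriver()
LoginPage(driver).login("qa_user", "secret")
print(driver.read("id=username"))
```

The same pattern ports directly to Roku WebDriver or Appium by swapping the driver implementation.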

Posted 3 weeks ago

Apply

0.0 - 4.0 years

0 Lacs

karnataka

On-site

We empower our people to stay resilient and relevant in a constantly changing world. We are looking for individuals who are always seeking creative ways to grow and learn, individuals who aspire to make a real impact, both now and in the future. If this resonates with you, then you would be a valuable addition to our dynamic international team.

As a Graduate Trainee Engineer, you will have the opportunity to contribute significantly by:
- Designing, developing, and optimizing NLP-driven AI solutions using cutting-edge models and techniques such as NER, embeddings, and summarization
- Building and operationalizing RAG pipelines and agentic workflows to facilitate intelligent, context-aware applications
- Fine-tuning, prompt-engineering, and deploying LLMs (such as OpenAI, Anthropic, Falcon, LLaMA, etc.) for specific domain use cases
- Collaborating with data scientists, backend developers, and cloud architects to construct scalable AI-first systems
- Evaluating and integrating third-party models/APIs and open-source libraries for generative use cases
- Continuously monitoring and enhancing model performance, latency, and accuracy in production environments
- Implementing observability, performance monitoring, and explainability features in deployed models
- Ensuring that solutions meet enterprise-level criteria for reliability, traceability, and maintainability

To excel in this role, you should possess:
- A Master's or Bachelor's degree in Computer Science, Machine Learning, AI, or a related field
- Exposure to AI/ML, with expertise in NLP and Generative AI
- A solid understanding of LLM architectures, fine-tuning methods (such as LoRA, PEFT), embeddings, and vector search
- Previous experience in designing and deploying RAG pipelines and working with multi-step agent architectures
- Proficiency in Python and frameworks like LangChain, Transformers (Hugging Face), LlamaIndex, smolagents, etc.
- Familiarity with ML observability and explainability tools (e.g., TruEra, Arize, WhyLabs)
- Knowledge of cloud-based ML services like AWS SageMaker, AWS Bedrock, Azure OpenAI Service, Azure ML Studio, and Azure AI Foundry
- Hands-on experience in integrating LLM-based agents in production settings
- An understanding of real-time NLP challenges (streaming, latency optimization, multi-turn dialogues)
- Familiarity with LangGraph, function calling, and tools for orchestration in agent-based systems
- Exposure to infrastructure-as-code (Terraform/CDK) and DevOps for AI pipelines
- Domain knowledge in Electrification, Energy, or Industrial AI would be advantageous

Join us in Bangalore and be part of a team that is shaping the future of entire cities, countries, and beyond. At Siemens, we are a diverse community of over 312,000 minds working together to build a better tomorrow. We value equality and encourage applications from individuals who reflect the diversity of the communities we serve. Our employment decisions are based on qualifications, merit, and business requirements. Bring your curiosity and creativity to Siemens and be a part of shaping tomorrow with us. Explore more about Siemens careers at www.siemens.com/careers and discover the digital world of Siemens at www.siemens.com/careers/digitalminds.
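The RAG pipelines this role centers on follow one core loop: embed the query, retrieve the nearest documents, and feed them into the prompt. A toy sketch using bag-of-words vectors and cosine similarity (the documents are made-up; a real pipeline would swap in an embedding model and a vector store, such as those LangChain or LlamaIndex wrap):

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=1):
    """Rank documents by similarity to the query — the 'R' in RAG."""
    ranked = sorted(documents, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "HLS segments video into small chunks for adaptive streaming",
    "Kafka is a distributed log for event streaming",
    "Redis is an in-memory key-value store",
]
context = retrieve("how does adaptive video streaming work", docs)[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: how does adaptive video streaming work"
print(context)
```

The retrieved `context` is then prepended to the user question so the LLM answers from grounded material rather than from its weights alone.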

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

ahmedabad, gujarat

On-site

As a skilled Architect & Designer, you will be responsible for developing end-to-end architecture blueprints for large-scale enterprise applications. You will define component-based and service-oriented architectures such as Microservices, SOA, and Event-Driven designs. Your expertise will be crucial in creating API-first designs using REST, GraphQL, and gRPC with clear versioning strategies. Additionally, you will establish integration patterns for internal systems, third-party APIs, and middleware while designing cloud-native architectures leveraging AWS, Azure, or GCP services. Your role will also involve defining coding guidelines, performance benchmarks, and security protocols, and participating in POC projects to evaluate new tools and frameworks. In terms of Performance, Security, & Scalability, you will be tasked with implementing caching strategies like Redis, Memcached, and CDN integrations. Ensuring the horizontal and vertical scalability of applications will be a key responsibility, along with applying security best practices such as OAuth 2.0, JWT, SAML, encryption (TLS/SSL, AES), input validation, and secure API gateways. Setting up application monitoring and logging using ELK, Prometheus, Grafana, or equivalent tools will also fall under your purview. Your expertise in DevOps & Delivery will be vital as you define CI/CD workflows using Jenkins, GitHub Actions, Azure DevOps, or GitLab CI. Collaborating with DevOps teams for container orchestration using Docker and Kubernetes and integrating automated testing pipelines including unit, integration, and load testing will be essential for successful project delivery. Your proficiency in various technical skills and frameworks such as Microservices, Domain-Driven Design (DDD), Event-Driven Architecture (EDA), front-end technologies like Angular and React, and message brokers like Kafka, RabbitMQ, Azure Event Hub, and Azure Service Bus will be highly valued. 
Knowledge of databases like PostgreSQL, MySQL, MS SQL Server, and MongoDB, caching layers like Redis and Memcached, as well as cloud services and infrastructure like Azure App Services, Functions, API Management, and Cosmos DB will be crucial for the role. Ensuring security through OAuth 2.0, SAML, OpenID Connect, JWT, secure coding practices, threat modeling, and familiarity with penetration testing will be part of your responsibilities. Proficiency in DevOps tools like Azure DevOps, GitLab CI/CD, Docker, and Kubernetes, testing frameworks like JUnit, NUnit, PyTest, and Mocha, performance/load testing tools such as JMeter and Locust, and monitoring and observability tools like Azure Monitor, Application Insights, Prometheus, and Grafana will be essential for success in this role. Preferred skills and certifications such as being a Microsoft Certified Azure Solutions Architect Expert and exposure to AI/ML services and IoT architectures will be advantageous. Key performance indicators for success include reduced system downtime through robust architecture designs, improved performance metrics, scalability readiness, successful delivery of complex projects without major architectural rework, and increased developer productivity through better standards and tools adoption.
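This listing leans heavily on token-based security (OAuth 2.0, JWT). The signature check at the heart of a JWT looks roughly like this stdlib-only HS256 sketch; production code should use a vetted library such as PyJWT, and must also validate claims like `exp` and `aud` and pin the expected algorithm:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> bytes:
    """Base64url without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Build header.payload.signature with HMAC-SHA256 (the HS256 algorithm)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(secret, header + b"." + body, hashlib.sha256).digest())
    return (header + b"." + body + b"." + sig).decode()

def verify_jwt(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.encode().split(b".")
    expected = b64url(hmac.new(secret, header + b"." + body, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

token = sign_jwt({"sub": "user-42", "role": "architect"}, b"demo-secret")
print(verify_jwt(token, b"demo-secret"))   # valid signature
print(verify_jwt(token, b"wrong-secret"))  # wrong key, verification fails
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive `==` can leak signature bytes through timing.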

Posted 1 month ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

Amagi's dynamic Onboarding team is seeking a highly skilled and experienced media-tech professional as Head of Technical Onboarding. The role requires deep expertise in Amagi's products, such as CLOUDPORT, Tarang, and Amagi Live, and in video streaming and delivery technologies such as AWS MediaLive, MediaConnect, CDNs, streaming, and video delivery, to facilitate seamless customer onboarding experiences.

Your responsibilities will include leading and managing the technical onboarding team, providing guidance, mentorship, and support to ensure an efficient onboarding process. You will also be responsible for managing escalations and risks to optimize customer satisfaction, leading collaboration with internal stakeholders, and developing strategic initiatives to enhance technical onboarding workflows. As a media technology expert, you are expected to stay updated on the latest trends and advancements in media technology, collaborate with internal teams to integrate emerging technologies into the onboarding process, and maintain client relationships by acting as a technical liaison between clients and internal teams. Moreover, you will be tasked with optimizing the onboarding process, refining workflows, reducing implementation timelines, and enhancing overall customer satisfaction. Designing and delivering comprehensive training programs for clients, fostering a culture of continuous learning within the technical onboarding team, and implementing quality assurance measures throughout the onboarding process will also be part of your responsibilities.

The ideal candidate has at least 12 years of experience in the IT sector, with a minimum of 8 years in a customer delivery role and some experience in people management. You should have proven leadership experience in overseeing technical onboarding in the media technology or streaming industry, a deep understanding of media technology, excellent communication skills, and the ability to manage high-pressure situations effectively. Strong project management skills and the ability to lead cross-functional teams are also essential for this position.

Posted 1 month ago

Apply

12.0 - 16.0 years

0 Lacs

kochi, kerala

On-site

As the Engineering Team Lead, you will play a crucial role in guiding a cross-functional team to deliver high-quality technical solutions for our strategic data products. Leading by example, you will be involved in designing solutions, writing code, mentoring engineers, and collaborating with architects and stakeholders to drive the team forward. In this hands-on leadership position, you will be actively engaged in coding, shaping architecture, refining team processes, and fostering the technical development of your colleagues. Working closely with the Data and Analytics leadership team, including the VP Data and Analytics, architect, data operations lead, and product manager, you will translate business objectives into scalable technical solutions. Your responsibilities will include leading the technical implementation of projects within the Data and Analytics Engineering team, from initial design to deployment. Collaborating with the architect and operations lead, you will define scalable, secure, and maintainable system designs. By breaking down complex problems and facilitating practical solutions, you will drive the team towards success. Championing software engineering best practices such as code quality, testing, observability, and automation will be a key aspect of your role. Additionally, you will mentor and support team members through code reviews, design sessions, and informal coaching. Evaluating new technologies, frameworks, and patterns will be essential, guiding their adoption as needed. Creating a culture of inclusion, transparency, and continuous improvement will be a priority for you as you lead by example in fostering collaboration and growth within the team. Your strong technical background, with experience in software design, distributed systems, and cloud architectures, will be instrumental in guiding the team towards excellence. 
Required Skills:
- Proven experience as a senior engineer or technical lead, demonstrating a history of delivering enterprise-grade software solutions, with over 12 years of experience in the software industry
- Deep understanding of software design, distributed systems, and cloud architectures
- Experience in leading cross-functional teams and mentoring engineers
- Ability to evaluate trade-offs and drive consensus in architectural decisions
- Strong communication skills to convey complex ideas clearly to both technical and non-technical audiences
- Proficiency in working in agile, collaborative environments with a team-oriented mindset

Desired Skills:
- Experience in fintech, especially related to broker-dealer systems, trading data, or regulatory compliance
- Hands-on experience with modern data platforms or data engineering systems
- Familiarity with technologies such as C#, AWS, Temporal.io, AWS Glue, GraphQL, or Blazor
- Demonstrable expertise in data modeling and data architecture, including data warehousing, data lakes, ETL, streaming, and data APIs

We are proud to be an Equal Opportunity Employer, committed to fostering a diverse and inclusive workplace.

Posted 1 month ago

Apply

15.0 - 19.0 years

0 Lacs

karnataka

On-site

As the Architectural Vision and Strategy Leader at Novo Nordisk Global Business Services (GBS) India within the Commercial DD&IT GBS department, you will play a crucial role in defining and communicating the architectural vision and strategy across the organization. Your responsibilities will include establishing a shared technical and architectural vision to ensure alignment with business objectives, providing clear guidance on solution intent, roadmaps, and enabler capabilities, and ensuring that the architectural runway supports both current and future business opportunities. In this role, you will be expected to implement governance frameworks and processes to ensure compliance and standardization, and to develop and maintain roadmaps for IT applications, platforms, and data to ensure scalability, integration, and reusability. You will also need to anticipate future business needs and align architecture to leverage emerging technologies, promoting the adoption of cloud, integration, and scalable solutions. Additionally, you will lead and manage a team of senior professionals and first-line managers, fostering collaboration and effective resource utilization.

To qualify for this position, you should hold a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Advanced certifications in IT architecture or cloud technologies are considered a plus. You should have a minimum of 15+ years of experience in IT architecture, including leadership roles managing senior professionals or teams. Your expertise should include designing scalable IT systems, cloud infrastructure, and integration solutions, along with a strong track record in aligning IT strategies with business goals, IT governance, and project management. Experience with technologies such as Veeva, AEM, or Salesforce is required, particularly with Salesforce customization using Apex classes/triggers, Visualforce pages, and Lightning, as well as integration technologies like SOAP or REST. Agile delivery experience is preferred, and you should possess strong leadership and people management skills, along with excellent communication, negotiation, stakeholder leadership, and conflict resolution skills. Analytical and problem-solving skills with a focus on continuous improvement are essential, as is the ability to build and sustain trusted relationships both internally and externally.

The Commercial DD&IT GBS department at Novo Nordisk is an integral part of the DD&IT GBS unit in Bangalore, India. The team consists of young and passionate IT professionals from diverse backgrounds, catering to the IT system needs of various commercial functions. As a part of Novo Nordisk, a leading global healthcare company committed to defeating serious chronic diseases, you will be contributing to impacting patient lives daily and working towards something bigger than yourself. If you are ready to take on this challenging and rewarding role, we encourage you to apply today for a life-changing career at Novo Nordisk. Join us in our mission to go further and make a difference in the world.

Posted 1 month ago

Apply

10.0 - 18.0 years

0 Lacs

navi mumbai, maharashtra

On-site

As a Lead - OTT for Jio Media Application Operation at Reliance Corporate Park in Navi Mumbai, your main responsibility will be to provide day-to-day application operation support for various OTT platforms such as JioTV, Jio News, Jio CDN, Jio Games, Headend, and associated applications. You should possess in-depth knowledge of OTT platforms, video streaming technologies, and content delivery networks. Experience with media codecs, formats, transports, and container protocols such as MPEG-2, H.264/AVC, AAC, AC3, MP4, and TS is essential, as is familiarity with multicast and streaming technologies such as HLS, HDS, Microsoft Smooth Streaming, DASH, RTMP, and DRM. Understanding of headend architecture (antennas, IRDs, encoders, switches) and CDN architectures (caches, origin servers, proxies, etc.) will be advantageous. You must have strong networking experience and preferably hold one or more network certifications.

Key requirements include specialized skills in running 24x7 business-critical IT operations, excellent people management capabilities to handle a team of more than 40 members, strong interpersonal and communication skills, and proficiency in presentation, reporting, analytical, and documentation skills. Moreover, hands-on knowledge of database technologies such as MySQL, MongoDB, Cassandra, Elasticsearch, and Redis, familiarity with application monitoring tools and their configuration, experience with cloud technologies (GCP, Azure, or AWS), and exposure to DevOps and CI/CD automation (Azure DevOps, Jenkins, Terraform, Ansible, etc.) are desired. A good understanding of DR setups and drills, and of governance, risk, and controls for compliance and audits, including vulnerability assessment and closure, is also expected.

You will be responsible for end-to-end understanding of the application operation environment, maintenance of the production and replica environments, and acting as the single point of contact for operational requirements during shifts. The ideal candidate is an engineering graduate with a minimum of 15 to 18 years of IT experience, of which at least 10-15 years should be in providing application operations support, including a minimum of 5 years in media/TV/content-delivery IT application operations. Your role will involve overseeing application capacity management, ensuring minimal impact of changes, identifying and resolving application-related challenges, managing projects with end-to-end ownership, defining and implementing best practices, maintaining SOPs, enhancing workflows, ensuring application KPI monitoring, and achieving high levels of application availability and security compliance. Collaboration with various teams such as the Product Team, IT Infrastructure team, Service Management Team, and other integrated application teams will be essential to manage the application operation life cycle effectively in the production environment. Familiarity with the ITSM service management process (incident, change, problem, defects) is also beneficial for this position.
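The streaming formats this role monitors (HLS, DASH) are plain-text manifests, so operational tooling often starts with parsing them. A minimal sketch that extracts the variant bitrates from an HLS master playlist (the playlist text is a made-up example; real tooling would also handle codecs, audio groups, and relative URI resolution):

```python
import re

def parse_hls_variants(master_playlist: str):
    """Pair each #EXT-X-STREAM-INF attribute line with the variant URI on the next line."""
    variants = []
    lines = [l.strip() for l in master_playlist.splitlines() if l.strip()]
    for i, line in enumerate(lines):
        if line.startswith("#EXT-X-STREAM-INF:"):
            bw = re.search(r"BANDWIDTH=(\d+)", line)
            res = re.search(r"RESOLUTION=(\d+x\d+)", line)
            variants.append({
                "bandwidth": int(bw.group(1)) if bw else None,
                "resolution": res.group(1) if res else None,
                "uri": lines[i + 1],  # per RFC 8216, the URI follows the tag line
            })
    return variants

playlist = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
hd/index.m3u8
"""
print(parse_hls_variants(playlist))
```

Checks like "does every advertised variant actually resolve and play?" are a common building block of the 24x7 application KPI monitoring described above.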

Posted 1 month ago

Apply
Page 1 of 4

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

