
4657 Apache Jobs - Page 14

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Provide support for data production systems at Nielsen Technology International Media (television and radio audience measurement), playing a critical role in ensuring their reliability, scalability, and security. Configure, implement, and deploy audience measurement solutions. Provide expert-level support, lead infrastructure automation initiatives, drive continuous improvement across our DevOps practices, and support Agile processes.

Core Technologies: Linux, Airflow, Bash, CI/CD, AWS services (EC2, S3, RDS, EKS, VPC), PostgreSQL, Python, Kubernetes.

Responsibilities:
- Architect, manage, and optimize scalable and secure cloud infrastructure (AWS) using Infrastructure as Code (Terraform, CloudFormation, Ansible).
- Implement and maintain robust CI/CD pipelines to streamline software deployment and infrastructure changes.
- Identify and implement cost-optimization strategies for cloud resources.
- Ensure the smooth operation of production systems across 30+ countries, providing expert-level troubleshooting and incident response.
- Manage cloud-related migration changes and updates, supporting the secure implementation of changes and fixes.
- Participate in a 24/7 on-call rotation for emergency support.

Key Skills:
- Proficiency in Linux, particularly Fedora- and Debian-based distributions (AlmaLinux, Amazon Linux, Ubuntu).
- Strong proficiency in scripting languages (Bash, Python) and SQL.
- Well-versed in automation and DevOps principles, with an understanding of CI/CD concepts.
- Working knowledge of infrastructure-as-code tools such as Terraform, CloudFormation, and Ansible.
- Solid experience with AWS core services (EC2, EKS, S3, RDS, VPC, IAM, Security Groups).
- Hands-on experience with Docker and Kubernetes for containerized workloads.
- Solid understanding of DevOps practices, including monitoring, security, and high-availability design.
- Hands-on experience with Apache Airflow for workflow automation and scheduling.
- Strong troubleshooting skills, with experience resolving issues and handling incidents in production environments.
- Foundational understanding of modern networking principles and cloud network architectures.
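Much of the incident-response and troubleshooting work this posting describes comes down to handling transient failures gracefully. A minimal Python sketch of retry with exponential backoff, a common pattern in production support (the function names, delays, and simulated failure are illustrative, not from the posting):

```python
import time

def retry_with_backoff(fn, max_attempts=4, base_delay=0.01):
    """Call fn(), retrying on exception with exponential backoff.

    Returns fn()'s result, or re-raises the last exception after
    max_attempts failed tries.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...

# Simulate a flaky production check that succeeds on the third call.
calls = {"n": 0}

def flaky_health_check():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service not ready")
    return "OK"

status = retry_with_backoff(flaky_health_check)
```

In real systems the delay base would be seconds rather than milliseconds, usually with added jitter to avoid thundering-herd retries.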

Posted 2 days ago

Apply

7.0 years

0 Lacs

Udaipur, Rajasthan, India

On-site


Requirements:
- 7+ years of hands-on Python development experience
- Proven experience designing and leading scalable backend systems
- Expert knowledge of Python and at least one framework (e.g., Django, Flask)
- Familiarity with ORM libraries and server-side templating (Jinja2, Mako, etc.)
- Strong understanding of multi-threaded, multi-process, and event-driven programming
- Proficiency in user authentication, authorization, and security compliance
- Frontend basics: JavaScript, HTML5, CSS3
- Experience designing and implementing scalable backend architectures and microservices
- Ability to integrate multiple databases, data sources, and third-party services
- Proficiency with version control systems (Git)
- Experience with deployment pipelines, server environment setup, and configuration
- Ability to implement and configure queueing systems such as RabbitMQ or Apache Kafka
- Clean, reusable, testable code with strong unit test coverage
- Deep debugging skills and secure coding practices, ensuring accessibility and data-protection compliance
- Ability to optimize application performance for various platforms (web, mobile)
- Effective collaboration with frontend developers, designers, and cross-functional teams
- Ability to lead deployment, configuration, and server environment efforts
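The queueing-systems requirement (RabbitMQ, Apache Kafka) boils down to the producer/consumer pattern. A self-contained sketch using Python's standard-library queue as an in-process stand-in for a real broker, which is not assumed installed (message names are invented):

```python
import queue
import threading

# In-process stand-in for a message broker such as RabbitMQ or Kafka.
broker = queue.Queue()
results = []

def producer(messages):
    """Publish messages, then a None sentinel to signal end-of-stream."""
    for msg in messages:
        broker.put(msg)
    broker.put(None)

def consumer():
    """Consume until the sentinel arrives; upper-casing stands in for real handling."""
    while True:
        msg = broker.get()
        if msg is None:
            break
        results.append(msg.upper())

t = threading.Thread(target=consumer)
t.start()
producer(["order.created", "order.paid"])
t.join()
```

With a real broker the sentinel would be replaced by consumer acknowledgements and topic offsets, but the decoupling between producer and consumer is the same.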

Posted 2 days ago

Apply

5.0 - 7.0 years

0 Lacs

Udaipur, Rajasthan, India

Remote


At GKM IT, we’re passionate about building seamless digital experiences powered by robust and intelligent data systems. We’re on the lookout for a Data Engineer - Senior II to architect and maintain high-performance data platforms that fuel decision-making and innovation. If you enjoy designing scalable pipelines, optimising data systems, and leading with technical excellence, you’ll thrive in our fast-paced, outcome-driven culture. You’ll take ownership of building reliable, secure, and scalable data infrastructure, from streaming pipelines to data lakes. Working closely with engineers, analysts, and business teams, you’ll ensure that data is not just available, but meaningful and impactful across the organization.

Requirements:
- 5 to 7 years of experience in data engineering
- Architect and maintain scalable, secure, and reliable data platforms and pipelines
- Design and implement data lake/data warehouse solutions such as Redshift, BigQuery, Snowflake, or Delta Lake
- Build real-time and batch data pipelines using tools like Apache Airflow, Kafka, Spark, and dbt
- Ensure data governance, lineage, quality, and observability
- Collaborate with stakeholders to define data strategies, architecture, and KPIs
- Lead code reviews and enforce best practices
- Mentor junior and mid-level engineers
- Optimize query performance, data storage, and infrastructure
- Integrate CI/CD workflows for data deployment and automated testing
- Evaluate and implement new tools and technologies as required
- Expert-level proficiency in Python and SQL
- Deep knowledge of distributed systems and data processing frameworks
- Proficiency in cloud platforms (AWS, GCP, or Azure), containerization, and CI/CD processes
- Experience with streaming platforms like Kafka or Kinesis and orchestration tools
- Highly skilled with Airflow, dbt, and data warehouse performance tuning
- Strong leadership, communication, and mentoring skills

Benefits: We don’t just hire employees—we invest in people.
At GKM IT, we’ve designed a benefits experience that’s thoughtful, supportive, and actually useful. Here’s what you can look forward to:
- Top-Tier Work Setup: You’ll be equipped with a premium MacBook and all the accessories you need. Great tools make great work.
- Flexible Schedules & Remote Support: Life isn’t 9-to-5. Enjoy flexible working hours, emergency work-from-home days, and utility support that makes remote life easier.
- Quarterly Performance Bonuses: We don’t believe in waiting a whole year to celebrate your success. Perform well, and you’ll see it in your paycheck quarterly.
- Learning is Funded Here: Conferences, courses, certifications—if it helps you grow, we’ve got your back. We even offer a dedicated educational allowance.
- Family-First Culture: Your loved ones matter to us too. From birthday and anniversary vouchers (Amazon, BookMyShow) to maternity and paternity leave, we’re here for life outside work.
- Celebrations & Gifting, The GKM IT Way: Onboarding hampers, festive goodies (Diwali, Holi, New Year), and company anniversary surprises—it’s always celebration season here.
- Team Bonding Moments: We love food, and we love people. Quarterly lunches, dinners, and fun company retreats help us stay connected beyond the screen.
- Healthcare That Has You Covered: Comprehensive health insurance for you and your family, because peace of mind shouldn’t be optional.
- Extra Rewards for Extra Effort: Weekend work doesn’t go unnoticed, and great referrals don’t go unrewarded. From incentives to bonuses, you’ll feel appreciated.
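The real-time and batch pipelines mentioned in the requirements can be illustrated with a tiny micro-batch aggregator in plain Python; in practice Spark or Kafka consumers would play this role (the event names and batch size below are hypothetical):

```python
from collections import defaultdict

def micro_batch_aggregate(events, batch_size=3):
    """Aggregate a stream of (key, value) events in fixed-size micro-batches.

    Yields one {key: running_total} snapshot per completed batch, the way a
    streaming job periodically flushes windowed aggregates to a sink.
    """
    totals = defaultdict(int)
    for i, (key, value) in enumerate(events, start=1):
        totals[key] += value
        if i % batch_size == 0:
            yield dict(totals)

events = [("clicks", 1), ("views", 2), ("clicks", 1),
          ("views", 4), ("clicks", 2), ("views", 1)]
snapshots = list(micro_batch_aggregate(events, batch_size=3))
```

Each snapshot carries cumulative totals, so a downstream sink can be updated idempotently by overwriting per-key values rather than appending deltas.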

Posted 2 days ago

Apply

3.0 - 5.0 years

17 - 30 Lacs

Bengaluru

Work from Office


Designation: Computer Vision Scientist (Geospatial Analysis)
Location: Bangalore
Position: Full-Time (No Hybrid)
Salary: Competitive
Company: QueNext

About Us: An AI/ML startup founded in 2015, operating in the power, banking, and agriculture sectors, and working directly with the Karnataka Government to offer insights and decisions. Company website: https://quenext.com/

You will be part of the team delivering large transformational projects to energy utilities, banks, and government bodies using our in-house patented AI-driven products. You will work on cutting-edge geospatial technologies with a steep learning curve, alongside founders from the Indian Statistical Institute (ISI) and INSEAD and a team of data scientists and programmers in a collaborative and innovative environment. You should have a strong understanding of, and interest in, remote sensing and coding. A superior academic record at a leading university in Computer Science, Data Science and Technology, Geoinformatics, Mathematics, Statistics, or a related field, or equivalent work experience, is preferable.

Job Description: We are seeking a highly skilled and innovative Computer Vision Scientist to join our team. The ideal candidate will have expertise in geographic information systems (GIS), hands-on experience with Apache Kafka, TensorFlow, and machine learning (ML) pipelines, and a strong background in computer vision. You will bring familiarity with satellite and aerial imagery, geospatial APIs and libraries, knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes), and an understanding of Big Data frameworks (e.g., Spark, Hadoop) in geospatial contexts.
This role is ideal for someone passionate about developing advanced geospatial applications, integrating cutting-edge technologies, and solving complex spatial data challenges.

Key Responsibilities:

GIS Development:
• Design, develop, and implement GIS-based applications and services
• Create, manipulate, and analyze geospatial data
• Integrate geospatial data into larger software systems

Machine Learning Pipelines:
• Build, optimize, and deploy ML pipelines for geospatial and computer vision tasks
• Leverage TensorFlow to create models for spatial analysis, object detection, and image classification
• Implement ML workflows from data ingestion to deployment and monitoring

Kafka Integration:
• Develop real-time data streaming and processing workflows using Apache Kafka
• Design event-driven systems for geospatial and computer vision applications
• Ensure scalability, reliability, and efficiency in Kafka-based pipelines

Computer Vision Applications:
• Apply computer vision techniques to geospatial data, satellite imagery, and aerial photography
• Develop and deploy models for tasks like feature extraction, land-use classification, and object recognition
• Stay updated on advancements in CV to enhance project outcomes

Collaboration and Documentation:
• Collaborate with cross-functional teams, including data scientists, software engineers, and GIS analysts
• Document workflows, processes, and technical details for easy replication and scalability
• Provide technical support and troubleshooting for GIS and ML-related challenges

Technical Skills:
• Proficiency in GIS tools
• Strong expertise in Apache Kafka for real-time data streaming
• Experience with TensorFlow, Keras, or PyTorch for ML model development
• Knowledge of machine learning pipelines and tools (e.g., Kubeflow, Airflow)
• Hands-on experience with computer vision techniques and libraries (e.g., OpenCV, TensorFlow Object Detection API)
• Strong programming skills in Python, Java, or C++
• Familiarity with cloud platforms (e.g., AWS, Azure, GCP) for ML and GIS deployment
• Knowledge of geospatial data formats (e.g., GeoJSON, Shapefiles, Raster)

INTERESTED CANDIDATES CAN SHARE THEIR UPDATED CV AT nawaz@stellaspire.com
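As a small taste of the geospatial analysis this role involves, here is the haversine great-circle distance in plain Python; production work would rely on geospatial libraries rather than hand-rolled formulas, and the coordinates below are approximate and purely illustrative:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points,
    using the haversine formula on a spherical Earth model."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Approximate distance between Bengaluru and Mumbai city centres.
dist = haversine_km(12.9716, 77.5946, 19.0760, 72.8777)
```

The spherical model is accurate to about 0.5%; ellipsoidal formulas (e.g., Vincenty) are used when higher precision matters.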

Posted 2 days ago

Apply

0.0 - 5.0 years

0 Lacs

Kochi, Kerala

On-site


Job Description: Highly skilled Laravel developer with a minimum of 4-5 years of Laravel experience, well-versed in current web technologies and the use of cutting-edge tools and third-party APIs. Strong knowledge of PHP, MySQL, HTML, CSS, JavaScript, and MVC architecture. Most importantly, should have experience with custom e-commerce websites. Familiarity with modern JavaScript frameworks like Vue.js, React, or Angular.

Responsibilities & Duties:
- Design, develop, test, deploy, and support new software solutions and changes to existing software solutions.
- Translate business requirements into components of complex, loosely coupled, distributed systems.
- Create REST-based web services and APIs for consumption by mobile and web platforms.
- Responsible for systems analysis, code creation, testing, build/release, and technical support.
- Keep excellent, organized project records and documentation.
- Strive for innovative solutions and quality code with on-time delivery.
- Manage multiple projects with tight deadlines.

Required Experience, Skills and Qualifications:
- Working experience with the Laravel framework: at least a few completed Laravel projects, or a minimum of 3-4 years of Laravel development experience.
- Working knowledge of HTML5, CSS3, AJAX/JavaScript, and jQuery or similar libraries.
- Experience in application development in the LAMP stack (Linux, Apache, MySQL, and PHP) environment.
- Good working knowledge of object-oriented PHP (OOP) and MVC frameworks.
- Must know Laravel coding standards and best practices.
- Must have working experience with web service technologies such as REST and JSON, and writing REST APIs for consumption by mobile and web platforms.
- Working knowledge of Git version control.
- Exposure to responsive web design.
- Strong unit testing and debugging skills.
- Good experience with databases (MySQL) and query writing.
- Excellent teamwork and problem-solving skills, flexibility, and ability to handle multiple tasks.
- Hands-on experience with project management tools like Desklog, Jira, or Asana.
- Understanding of server-side security, performance optimization, and cross-browser compatibility.
- Experience deploying applications on cloud platforms (AWS, Azure, or similar) is a plus.

How to Apply: Interested candidates are invited to submit their resume and a cover letter detailing relevant experience and achievements to hr.kochi@mightywarner.com. Please include "Laravel Developer" in the subject line.

Job Type: Full-time
Pay: ₹35,000.00 - ₹45,000.00 per month
Benefits: Provident Fund
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: Kochi, Kerala: Reliably commute or planning to relocate before starting work (Preferred)
Application Question(s): Are you ready to join immediately? How much custom development experience do you have? Do you have e-commerce development experience?
Experience: Laravel Developer: 5 years (Preferred)
Work Location: In person
Expected Start Date: 22/06/2025

Posted 2 days ago

Apply

4.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site


The world's top banks use Zafin's integrated platform to drive transformative customer value. Powered by an innovative AI-powered architecture, Zafin's platform seamlessly unifies data from across the enterprise to accelerate product and pricing innovation, automate deal management and billing, and create personalized customer offerings that drive expansion and loyalty. Zafin empowers banks to drive sustainable growth, strengthen their market position, and define the future of banking centered around customer value. Zafin is privately owned and operates out of multiple global locations including North America, Europe, and Australia. Zafin is backed by significant financial partners committed to accelerating the company's growth, fueling innovation and ensuring Zafin's enduring value as a leading provider of cloud-based solutions to the financial services sector. Zafin is proud to be recognized as a top employer. In Canada, the UK, and India, we are certified as a "Great Place to Work". The Great Place to Work program recognizes employers who invest in and value their people and organizational culture. The company's culture is driven by strategy and focused on execution. We make and keep our commitments. What is the opportunity? This role is at the intersection of banking and analytics.
It requires diving deep into the banking domain to understand and define the metrics, and into the technical domain to implement and present the metrics through business intelligence tools. We're building a next-generation analytics product to help banks maximize the financial wellness of their clients. The product is ambitious - that's why we're looking for a team member who is laterally skilled and comfortable with ambiguity. Reporting to the Senior Vice President, Analytics as part of the Zafin Product Team, you are a data-visualization subject matter expert who can define and implement the insights to be embedded in the product using data visualization tools (DataViz) and applying analytics expertise to make an impact. If storytelling with data is a passion of yours, and data visualization and analytics expertise is what has enabled you to reach your current level in your career, take a look at how we do it on one of the most advanced banking technology products in the market today and connect with us to learn more.

Location: Chennai or Trivandrum, India

Purpose of the Role: As a Software Engineer – APIs & Data Services, you will own the "last mile" that transforms data pipelines into polished, product-ready APIs and lightweight microservices. Working alongside data engineers and product managers, you will deliver features that power capabilities like Dynamic Cohorts, Signals, and our GPT-powered release notes assistant.

What You'll Build & Run (approximate focus: 60% API / 40% data):
- Product-Facing APIs: Design REST/GraphQL endpoints for cohort, feature-flag, and release-notes data. Build microservices in Java/Kotlin (Spring Boot or Vert.x) or Python (FastAPI) with production-grade SLAs.
- Schema & Contract Management: Manage JSON/Avro/Protobuf schemas, generate client SDKs, and enforce compatibility through CI/CD pipelines.
- Data-Ops Integration: Interface with Delta Lake tables in Databricks using Spark/JDBC. Transform datasets with PySpark or Spark SQL and surface them via APIs.
- Pipeline Stewardship: Extend Airflow 2.x DAGs (Python), orchestrate upstream Spark jobs, and manage downstream triggers. Develop custom operators as needed.
- DevOps & Quality: Manage GitHub Actions, Docker containers, Kubernetes manifests, and Datadog dashboards to ensure service reliability.
- LLM & AI Features: Enable prompt engineering and embeddings exposure via APIs; experiment with tools like OpenAI, LangChain, or LangChain4j to support product innovation.

About You: You're a language-flexible engineer with a solid grasp of system design and the discipline to ship robust, well-documented, and observable software. You're curious, driven, and passionate about building infrastructure that scales with evolving product needs.

Mandatory Skills:
- 4 to 6 years of professional experience in Java (11+) and Spring Boot
- Solid command of API design principles (REST, OpenAPI, GraphQL)
- Proficiency in SQL databases
- Experience with Docker, Git, and JUnit
- Hands-on knowledge of low-level design (LLD) and system design fundamentals

Highly Preferred / Optional Skills:
- Working experience with Apache Airflow
- Familiarity with cloud deployment (e.g., Azure AKS, GCP, AWS)
- Exposure to Kubernetes and microservice orchestration
- Frontend/UI experience in any modern framework (e.g., React, Angular)
- Experience with Python (FastAPI, Flask)

Good-to-Have Skills:
- CI/CD pipeline development using GitHub Actions
- Familiarity with code reviews, HLD, and architectural discussions
- Experience integrating with LLM APIs like OpenAI and building prompt-based systems
- Exposure to schema validation tools such as Pydantic, Jackson, Protobuf
- Monitoring and alerting with Datadog, Prometheus, or equivalent

What's in it for you: Joining our team means being part of a culture that values diversity, teamwork, and high-quality work.
We offer competitive salaries, annual bonus potential, generous paid time off, paid volunteering days, wellness benefits, and robust opportunities for professional growth and career advancement. Want to learn more about what you can look forward to during your career with us? Visit our careers site and our openings: zafin.com/careers Zafin welcomes and encourages applications from people with disabilities. Accommodations are available on request for candidates taking part in all aspects of the selection process. Zafin is committed to protecting the privacy and security of the personal information collected from all applicants throughout the recruitment process. The methods by which Zafin collects, uses, stores, handles, retains, or discloses applicant information can be accessed by reviewing Zafin's privacy policy at https://zafin.com/privacy-notice/. By submitting a job application, you confirm that you agree to the processing of your personal data by Zafin as described in the candidate privacy notice.
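The schema and contract management work this role describes can be sketched in miniature: a hand-rolled payload check in Python, standing in for the validation tools the posting actually names (Pydantic, Jackson, Protobuf). The field names below are hypothetical, not from Zafin's API:

```python
# Expected contract for a hypothetical cohort payload.
SCHEMA = {"cohort_id": str, "name": str, "size": int}

def validate(payload, schema=SCHEMA):
    """Return a list of contract violations for a JSON-like payload:
    missing required fields and type mismatches."""
    errors = []
    for field, expected in schema.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected):
            errors.append(f"{field}: expected {expected.__name__}")
    return errors

ok = validate({"cohort_id": "c-42", "name": "high-value", "size": 1200})
bad = validate({"cohort_id": "c-42", "size": "1200"})
```

Enforcing such checks in CI, as the posting describes, amounts to running validation against sample payloads whenever the schema or its producers change.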

Posted 2 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Purpose: As a key member of the support team, the Application Support Engineer is responsible for ensuring the stability and availability of critical applications. This role involves monitoring, troubleshooting, and resolving application issues while adhering to defined SLAs and processes.

Desired Skills And Experience:
- Experience in an application support or technical support role, with strong troubleshooting, problem-solving, and analytical skills.
- Ability to work independently and effectively, and to thrive in a fast-paced, high-pressure environment.
- Experience in either C# or Java preferred, to support effective troubleshooting and understanding of application code.
- Knowledge of various operating systems (Windows, Linux, macOS) and familiarity with software applications and tools used in the industry.
- Proficiency in programming languages such as Python, and scripting languages like Bash or PowerShell.
- Experience with database systems such as MySQL, Oracle, and SQL Server, and the ability to write and optimize SQL queries.
- Understanding of network protocols and configurations, and troubleshooting of network-related issues.
- Skills in managing and configuring servers, including web servers (Apache, Nginx) and application servers. (Desirable)
- Familiarity with ITIL incident management processes.
- Familiarity with monitoring and logging tools like Nagios, Splunk, or the ELK stack to track application performance and issues.
- Knowledge of version control systems like Git to manage code changes and collaborate with development teams. (Desirable)
- Experience with cloud platforms such as AWS, Azure, or Google Cloud for deploying and managing applications. (Desirable)
- Experience in fixed income markets or financial applications support is preferred.
- Strong attention to detail and ability to follow processes.
- Ability to adapt to changing priorities and client needs, with good verbal and written communication skills.
Key Responsibilities:
- Provide L1/L2 technical support for applications.
- Monitor application performance and system health, proactively identifying potential issues.
- Investigate, diagnose, and resolve application incidents and service requests within agreed SLAs.
- Escalate complex or unresolved issues to the Service Manager or relevant senior teams.
- Document all support activities, including incident details, troubleshooting steps, and resolutions.
- Participate in shift handovers and knowledge sharing.
- Perform routine maintenance tasks to ensure optimal application performance.
- Collaborate with other support teams to ensure seamless issue resolution.
- Develop and maintain technical documentation and knowledge base articles.
- Assist in the implementation of new applications and updates.
- Provide training and support to junior team members.
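The monitoring and logging duties above typically start with scanning application logs for error levels. A minimal Python sketch using the standard library, standing in for tools like Splunk or the ELK stack (the log lines, service name, and alert threshold are invented for illustration):

```python
import re
from collections import Counter

LOG_LINES = [
    "2025-06-10 09:00:01 INFO  order-svc started",
    "2025-06-10 09:00:05 ERROR order-svc DB connection refused",
    "2025-06-10 09:00:06 ERROR order-svc DB connection refused",
    "2025-06-10 09:00:09 WARN  order-svc retrying",
]

def count_levels(lines):
    """Tally log levels, the first step of a triage script."""
    pattern = re.compile(r"\b(INFO|WARN|ERROR)\b")
    counts = Counter()
    for line in lines:
        m = pattern.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

counts = count_levels(LOG_LINES)
breached = counts["ERROR"] >= 2  # e.g. notify the on-call above this threshold
```

In a real L1/L2 workflow the same tally would feed an alerting rule rather than a local variable.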

Posted 2 days ago

Apply


5.0 - 10.0 years

0 Lacs

Cochin

On-site


Orion Innovation is a premier, award-winning, global business and technology services firm. Orion delivers game-changing business transformation and product development rooted in digital strategy, experience design, and engineering, with a unique combination of agility, scale, and maturity. We work with a wide range of clients across many industries including financial services, professional services, telecommunications and media, consumer products, automotive, industrial automation, professional sports and entertainment, life sciences, ecommerce, and education.

Data Engineer
Locations: Kochi/Chennai/Coimbatore/Mumbai/Pune/Hyderabad

Job Overview: We are seeking a highly skilled and experienced Senior Data Engineer to join our growing data team. The ideal candidate will have deep expertise in Azure Databricks and Python, and experience building scalable data pipelines. Familiarity with Data Fabric architectures is a plus. You'll work closely with data scientists, analysts, and business stakeholders to deliver robust data solutions that drive insights and innovation.

Key Responsibilities:
- Design, build, and maintain large-scale, distributed data pipelines using Azure Databricks and PySpark, as well as Azure Data Factory.
- Develop and optimize data workflows and ETL processes in Azure Cloud environments.
- Write clean, maintainable, and efficient code in Python for data engineering tasks.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Monitor and troubleshoot data pipelines for performance and reliability issues.
- Implement data quality checks and validations, and ensure data lineage and governance.
- Contribute to the design and implementation of a Data Fabric architecture (desirable).

Required Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5–10 years of experience in data engineering or related roles.
- Expertise in Azure Databricks, Delta Lake, and Spark.
- Strong proficiency in Python, especially in a data processing context.
- Experience with Azure Data Lake, Azure Data Factory, and related Azure services.
- Hands-on experience building data ingestion and transformation pipelines.
- Familiarity with CI/CD pipelines and version control systems (e.g., Git).

Good to Have:
- Experience with or understanding of Data Fabric concepts (e.g., data virtualization, unified data access, metadata-driven architectures).
- Knowledge of modern data warehousing and lakehouse principles.
- Exposure to tools like Apache Airflow, dbt, or similar.
- Experience working in agile/scrum environments.
- DP-500 and DP-600 certifications.

What We Offer: Competitive salary and performance-based bonuses. Flexible work arrangements. Opportunities for continuous learning and career growth. A collaborative, inclusive, and innovative work culture. www.orioninc.com

Orion is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, citizenship status, disability status, genetic information, protected veteran status, or any other characteristic protected by law.

Candidate Privacy Policy: Orion Systems Integrators, LLC and its subsidiaries and affiliates (collectively, "Orion," "we" or "us") are committed to protecting your privacy. This Candidate Privacy Policy (orioninc.com) ("Notice") explains: what information we collect during our application and recruitment process and why we collect it; how we handle that information; and how to access and update that information. Your use of Orion services is governed by any applicable terms in this notice and our general Privacy Policy.
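The data-quality checks and validations listed in the responsibilities can be sketched in plain Python; in practice they would run inside Databricks/PySpark jobs over DataFrames rather than lists of dicts (the column names and sample rows are illustrative):

```python
def quality_report(rows, required=("id", "amount")):
    """Run simple data-quality checks on a batch of dict rows:
    null checks on required columns and duplicate-id detection."""
    nulls = sum(1 for r in rows for col in required if r.get(col) is None)
    ids = [r.get("id") for r in rows]
    duplicates = len(ids) - len(set(ids))
    return {"rows": len(rows), "nulls": nulls, "duplicate_ids": duplicates}

batch = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},  # fails the null check
    {"id": 2, "amount": 5.0},   # duplicate business key
]
report = quality_report(batch)
```

A pipeline would typically fail or quarantine the batch when the report exceeds agreed thresholds, which is where lineage and governance metadata come in.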

Posted 2 days ago

Apply

3.0 years

0 Lacs

India

Remote


About CuringBusy: CuringBusy is a fully remote company providing subscription-based, remote Executive Assistant services to busy entrepreneurs, business owners, and professionals across the globe. We help entrepreneurs free up their time by outsourcing their everyday, routine admin work like calendar management, email, and customer service, and marketing tasks such as social media, digital marketing, and website management.

Job Role: The Digital Marketing Specialist is responsible for developing, implementing, and managing website and marketing strategies that promote products and services across multiple digital channels. This includes creating campaigns and driving digital marketing initiatives in search engine marketing, email marketing, display advertising, website creation and optimization, paid social media, and mobile marketing. This role will develop the digital marketing plan and coordinate with the sales, product, content, and other teams to ensure the successful execution of campaigns.

Responsibilities:
● Develop effective digital marketing plans to drive awareness of our products/services, aligned with the company's business needs.
● Website development on WordPress.
● Manage Search Engine Marketing (SEM), display advertising, website optimization, and conversion rate optimization efforts.
● Lead paid social media strategies and campaigns (LinkedIn, Facebook, and Instagram) and identify opportunities to leverage emerging platforms.
● Manage email campaigns, including segmentation strategies and automation pieces.
● Provide reporting on online performance KPIs such as CTRs, CPMs, and CPCs.
● Design, build, and maintain our social media presence.
● Design and manage social media and digital advertising campaigns, and implement a social media strategy aligned with business goals.
● Measure and report the performance of all digital marketing campaigns and assess against goals (ROI and KPIs).
● Utilizes strong analytical ability to evaluate end-to-end customer experience across multiple channels and customer touchpoints. Job Qualifications and Skill Sets: ● Bachelor’s or master’s degree in Digital Marketing. ● Demonstrable 3+ years of experience leading and managing SEO/SEM, marketing database, email, social media, and display advertising campaigns. ● Highly creative with experience in identifying target audiences and devising digital campaigns that engage, inform, and motivate ● Experience in optimizing landing pages and user funnels. ● Proficiency in graphic design software including Adobe Photoshop, Adobe Illustrator, and other visual design tools. ● Knowledge of both front-end and back-end languages. ● Familiarity with databases (e.g. MySQL, MongoDB), web servers (e.g. Apache), and UI/UX design ● Solid knowledge of website and marketing analytics tools (e.g., Google Analytics, NetInsight, Omniture, WebTrends, SEMRush, etc.) ● Experienced in any of the Website Platforms: WordPress, Wix, Shopify, WooCommerce, PrestaShop, and Squarespace. ● Experience with advertisement tools (e.g., Google Ads, Facebook Ads, Bing Ads, Instagram Ads, YouTube ads, etc.) ● Knowledge of Software like Mailerlite, Mailchimp, Sendinblue, Sender, Hubspot email marketing, Omnisend, Sendpulse, Mailjet, Moosend, etc. ● Proficient in marketing research and statistical analysis. Your Benefits ● Work from Home Job/Completely Remote. ● Opportunity to grow with a Fast-Growing Startup. ● Exposure to International Clients. Work Timings: Evening Shift or Night Shift 3 pm-12 am/6 pm-3 am ( Monday- Friday) Salary: Based on company standards and skill sets. Job Type: Full-time Pay: As per Industry Standards Show more Show less

Posted 2 days ago


0 years

0 Lacs

Greater Bengaluru Area

On-site


We are looking for a skilled ETL pipeline support engineer to join our DevOps team. In this role, you will ensure the smooth operation of production ETL pipelines and be responsible for monitoring and troubleshooting existing pipelines. This role requires a strong understanding of SQL and Spark, and experience with AWS Glue and Redshift.

Required Skills and Experience:
Bachelor's degree in Computer Science, Engineering, or a related field.
Proven experience in supporting and maintaining ETL pipelines.
Strong proficiency in SQL and experience with relational databases (e.g., Redshift).
Solid understanding of distributed computing concepts and experience with Apache Spark.
Hands-on experience with AWS Glue and other AWS data services (e.g., S3, Lambda).
Experience with data warehousing concepts and best practices.
Excellent problem-solving and analytical skills, and strong communication and collaboration skills.
Ability to work independently and as part of a team.

Preferred Skills and Experience:
Experience with other ETL tools and technologies.
Experience with scripting languages (e.g., Python).
Familiarity with Agile development methodologies.
Experience with data visualization tools (e.g., Tableau, Power BI).

Roles & Responsibilities:
Monitor and maintain existing ETL pipelines, ensuring data quality and availability.
Identify and resolve pipeline issues and data errors.
Troubleshoot data integration processes.
Collaborate with data engineers and other stakeholders to resolve complex issues where needed.
Develop and maintain documentation for ETL processes and pipelines.
Participate in on-call rotation for production support.
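The monitoring duties described in this role often come down to simple threshold checks on pipeline output. A minimal, self-contained sketch of one such data-quality gate; the row counts and the 20% tolerance are illustrative, not from the posting:

```python
# Hypothetical data-quality gate for a nightly ETL load: flag a table whose
# daily row count drifts more than a tolerance from its trailing average.
from statistics import mean

def row_count_alert(history, today, tolerance=0.2):
    """Return True if today's count deviates more than `tolerance` from the trailing mean."""
    if not history:
        return False  # no baseline yet, nothing to compare against
    baseline = mean(history)
    if baseline == 0:
        return today != 0
    return abs(today - baseline) / baseline > tolerance
```

In practice the history would come from a Redshift query or Glue job-run metrics, and the alert would page via the team's monitoring stack.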

Posted 2 days ago


1.0 years

11 - 13 Lacs

Hyderābād

Remote


Experience: 1+ years
Work location: Bangalore, Chennai, Hyderabad, Pune (Hybrid)
Job Description: GCP Cloud Engineer
Shift Time: 2 to 11 PM IST
Budget: Max 13 LPA
Primary Skills & Weightage: GCP 50%, Kubernetes 25%, Node.js 25%

Technical Skills:
Cloud: Experience working with Google Cloud Platform (GCP) services.
Containers & Orchestration: Practical experience deploying and managing applications on Kubernetes.
Programming: Proficiency in Node.js development, including building and maintaining RESTful APIs or backend services.
Messaging: Familiarity with Apache Kafka for producing and consuming messages.
Databases: Experience with PostgreSQL or similar relational databases (writing queries, basic schema design).
Version Control: Proficient with Git and GitHub workflows (branching, pull requests, code reviews).
Development Tools: Comfortable using Visual Studio Code (VS Code) or similar IDEs.

Additional Requirements:
Communication: Ability to communicate clearly in English (written and verbal).
Collaboration: Experience working in distributed or remote teams.
Problem Solving: Demonstrated ability to troubleshoot and debug issues independently.
Learning: Willingness to learn new technologies and adapt to changing requirements.

Preferred but not required:
Experience with CI/CD pipelines.
Familiarity with Agile methodologies.
Exposure to monitoring/logging tools (e.g., Prometheus, Grafana, ELK stack).

Job Type: Full-time
Pay: ₹1,100,000.00 - ₹1,300,000.00 per year
Schedule: UK shift
Work Location: In person
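For context on the Kafka skill above: brokers typically redeliver messages at-least-once, so consumers must process them idempotently. A toy sketch of the pattern, written in Python for brevity rather than the Node.js the posting names, with an in-memory set standing in for what would normally be a PostgreSQL unique constraint:

```python
# At-least-once message handling: the same message may arrive twice after a
# consumer restart, so track processed IDs and skip duplicates.
def process_batch(messages, seen_ids, handler):
    """Apply handler to each message exactly once, skipping already-seen IDs."""
    handled = []
    for msg in messages:
        if msg["id"] in seen_ids:
            continue  # duplicate delivery, already processed
        handler(msg)
        seen_ids.add(msg["id"])
        handled.append(msg["id"])
    return handled
```

Redelivering an entire batch then becomes a harmless no-op, which is the property that makes consumer restarts safe.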

Posted 2 days ago


3.0 - 7.0 years

7 - 16 Lacs

Hyderābād

On-site


AI Specialist / Machine Learning Engineer
Location: On-site (Hyderabad)
Department: Data Science & AI Innovation
Experience Level: Mid-Senior
Reports To: Director of AI / CTO
Employment Type: Full-time

Job Summary: We are seeking a skilled and forward-thinking AI Specialist to join our advanced technology team. In this role, you will lead the design, development, and deployment of cutting-edge AI/ML solutions, including large language models (LLMs), multimodal systems, and generative AI. You will collaborate with cross-functional teams to develop intelligent systems, automate complex workflows, and unlock insights from data at scale.

Key Responsibilities:
Design and implement machine learning models for natural language processing (NLP), computer vision, predictive analytics, and generative AI.
Fine-tune and deploy LLMs using frameworks such as Hugging Face Transformers, OpenAI APIs, and Anthropic Claude.
Develop Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, and vector databases (e.g., Pinecone, Weaviate, Qdrant).
Productionize ML workflows using MLflow, TensorFlow Extended (TFX), or AWS SageMaker Pipelines.
Integrate generative AI with business applications, including Copilot-style features, chat interfaces, and workflow automation.
Collaborate with data scientists, software engineers, and product managers to build and scale AI-powered products.
Monitor, evaluate, and optimize model performance, focusing on fairness, explainability (e.g., SHAP, LIME), and data/model drift.
Stay informed on cutting-edge AI research (e.g., NeurIPS, ICLR, arXiv) and evaluate its applicability to business challenges.

Tools & Technologies:
Languages & Frameworks: Python, PyTorch, TensorFlow, JAX; FastAPI, LangChain, LlamaIndex
ML & AI Platforms: OpenAI (GPT-4/4o), Anthropic Claude, Mistral, Cohere; Hugging Face Hub & Transformers; Google Vertex AI, AWS SageMaker, Azure ML
Data & Deployment: MLflow, DVC, Apache Airflow, Ray; Docker, Kubernetes, RESTful APIs, GraphQL; Snowflake, BigQuery, Delta Lake
Vector Databases & RAG Tools: Pinecone, Weaviate, Qdrant, FAISS, ChromaDB, Milvus
Generative & Multimodal AI: DALL·E, Sora, Midjourney, Runway; Whisper, CLIP, SAM (Segment Anything Model)

Qualifications:
Bachelor's or Master's in Computer Science, AI, Data Science, or a related discipline
3-7 years of experience in machine learning or applied AI
Hands-on experience deploying ML models to production environments
Familiarity with LLM prompt engineering and fine-tuning
Strong analytical thinking, problem-solving ability, and communication skills

Preferred Qualifications:
Contributions to open-source AI projects or academic publications
Experience with multi-agent frameworks (e.g., AutoGPT, OpenDevin)
Knowledge of synthetic data generation and augmentation techniques

Job Type: Permanent
Pay: ₹734,802.74 - ₹1,663,085.14 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift
Work Location: In person
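As a rough illustration of the RAG pipelines this role builds, here is a toy retrieval-then-prompt step using bag-of-words cosine similarity. Production systems substitute learned embeddings and a vector database such as those listed above; only the control flow is the same:

```python
# Toy RAG retrieval: rank documents by cosine similarity of word-count
# vectors, then assemble a grounded prompt from the best match.
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity of two sparse count vectors (Counters)."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    qv = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(qv, Counter(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Stuff the retrieved context ahead of the question for the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The prompt returned by `build_prompt` is what an LLM call would receive; swapping `retrieve` for an ANN lookup over real embeddings turns this into the standard pipeline shape.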

Posted 2 days ago


0 years

0 Lacs

India

On-site


Job Title: PHP Intern (Full Stack Preferred)
Location: Laxmi Nagar
Employment Type: Internship / Entry-Level
Experience: Freshers / Interns

Job Description: We are looking for a highly motivated PHP Intern with a strong foundational knowledge of website design and development, a creative mindset, and a willingness to learn and grow in a fast-paced environment. As part of our cross-functional development team, you will assist in building scalable software solutions and contribute across all stages of the software development life cycle, from ideation to deployment. Full Stack developers will be given preference. Freshers and interns with a strong learning attitude and technical base are encouraged to apply.

Key Responsibilities:
Assist in the creation and implementation of various web-based applications and platforms.
Work on development tasks using Core PHP, the LAMP stack, WordPress, Magento, and other CMSs.
Support integration of third-party APIs and external systems.
Help design intuitive, user-friendly front-end experiences using HTML5, CSS3, JavaScript, jQuery, and AJAX.
Work alongside senior developers on Shopify, React, Flutter, and other current tech stacks.
Collaborate on database design and management using MySQL or NoSQL.
Participate in DevOps processes and deployment via Nginx, Apache, and AWS.
Use version control and collaborate through GitHub.
Stay current with new technologies and industry trends to improve performance and usability.

Preferred Skills & Qualifications:
Basic experience or academic knowledge of PHP and full-stack development.
Familiarity with CMS platforms like WordPress, Magento, and Shopify.
Understanding of front-end frameworks and responsive design principles.
Exposure to cloud services like AWS is a plus.
Good analytical, debugging, and problem-solving skills.

Job Type: Full-time
Pay: ₹5,000.00 per month
Schedule: Day shift
Work Location: In person
Application Deadline: 22/06/2025
Expected Start Date: 22/06/2025

Posted 2 days ago


5.0 years

0 Lacs

Gurgaon

Remote


About Us: At apexanalytix, we're lifelong innovators! Since our founding nearly four decades ago we've been consistently growing, profitable, and delivering the best procure-to-pay solutions to the world. We're the perfect balance of established company and start-up. You will find a unique home here. And you'll recognize the names of our clients. Most of them are on The Global 2000. They trust us to give them the latest in controls, audit and analytics software every day. Industry analysts consistently rank us as a top supplier management solution, and you'll be helping build that reputation. Read more about apexanalytix - https://www.apexanalytix.com/about/

Job Details

The Role - Quick Take: We are looking for a highly skilled systems engineer with experience in virtualization, Linux, Kubernetes, and server infrastructure. The engineer will be responsible for designing, deploying, and maintaining enterprise-grade cloud infrastructure using Apache CloudStack or similar technology and Kubernetes on the Linux operating system.

The Work -

Hypervisor Administration & Engineering:
Architect, deploy, and manage Apache CloudStack for private and hybrid cloud environments.
Manage and optimize KVM or similar virtualization technology.
Implement high-availability cloud services using redundant networking, storage, and compute.
Automate infrastructure provisioning using OpenTofu, Ansible, and API scripting.
Troubleshoot and optimize hypervisor networking (virtual routers, isolated networks), storage, and API integrations.
Working experience with shared storage technologies like GFS and NFS.

Kubernetes & Container Orchestration:
Deploy and manage Kubernetes clusters in on-premises and hybrid environments.
Integrate Cluster API (CAPI) for automated K8s provisioning.
Manage Helm, Azure DevOps, and ingress (Nginx/Citrix) for application deployment.
Implement container security best practices, policy-based access control, and resource optimization.

Linux Administration:
Configure and maintain RedHat HA clustering (Pacemaker, Corosync) for mission-critical applications.
Manage GFS2 shared storage, cluster fencing, and high-availability networking.
Ensure seamless failover and data consistency across cluster nodes.
Perform Linux OS hardening, security patching, performance tuning, and troubleshooting.

Physical Server Maintenance & Hardware Management:
Perform physical server installation, diagnostics, firmware upgrades, and maintenance.
Work with SAN/NAS storage, network switches, and power management in data centers.
Implement out-of-band management (IPMI/iLO/DRAC) for remote server monitoring and recovery.
Ensure hardware resilience, failure prediction, and proper capacity planning.

Automation, Monitoring & Performance Optimization:
Automate infrastructure provisioning, monitoring, and self-healing capabilities.
Implement Prometheus, Grafana, and custom scripting via API for proactive monitoring.
Optimize compute, storage, and network performance in large-scale environments.
Implement disaster recovery (DR) and backup solutions for cloud workloads.

Collaboration & Documentation:
Work closely with DevOps, Enterprise Support, and software developers to streamline cloud workflows.
Maintain detailed infrastructure documentation, playbooks, and incident reports.
Train and mentor junior engineers on CloudStack, Kubernetes, and HA clustering.

The Must-Haves -
5+ years of experience in CloudStack or a similar virtualization platform, Kubernetes, and Linux system administration.
Strong expertise in Apache CloudStack (4.19+) or a similar virtualization platform, the KVM hypervisor, and Cluster API (CAPI).
Extensive experience with RedHat HA clustering (Pacemaker, Corosync) and GFS2 shared storage.
Proficiency in OpenTofu, Ansible, Bash, Python, and Go for infrastructure automation.
Experience with networking (VXLAN, SDN, BGP) and security best practices.
Hands-on expertise in physical server maintenance, IPMI/iLO, RAID, and SAN storage.
Strong troubleshooting skills in Linux performance tuning, logs, and kernel debugging.
Knowledge of monitoring tools (Prometheus, Grafana, Alertmanager).

Preferred Qualifications:
Experience with multi-cloud (AWS, Azure, GCP) or hybrid cloud environments.
Familiarity with CloudStack API customization and plugin development.
Strong background in disaster recovery (DR) and backup solutions for cloud environments.
Understanding of service meshes, ingress, and SSO.
Experience in Cisco UCS platform management.

Over the years, we've discovered that the most effective and successful associates at apexanalytix are people who have a specific combination of values, skills, and behaviors that we call "The apex Way". Read more about The apex Way - https://www.apexanalytix.com/careers/

Benefits: At apexanalytix we know that our associates are the reason behind our successes. We truly value you as an associate and part of our professional family. Our goal is to offer the very best benefits possible to you and your loved ones. When it comes to benefits, whether for yourself or your family, the most important aspect is choice. And we get that. apexanalytix offers competitive benefits for the countries that we serve, in addition to our BeWell@apex initiative that encourages employees' growth in six key wellness areas: Emotional, Physical, Community, Financial, Social, and Intelligence. With resources such as a strong Mentor Program, an Internal Training Portal, plus Education, Tuition, and Certification Assistance, we provide tools for our associates to grow and develop.
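For context on the HA clustering duties in this role: Pacemaker/Corosync decide which side of a network split may keep running cluster services by a majority-vote quorum. A simplified sketch of that arithmetic; real deployments add quorum devices and tie-breakers, which this ignores:

```python
# Majority-quorum rule as used by Corosync-style clusters: a partition may
# continue only if it holds strictly more than half of the expected votes.
def has_quorum(votes_present, expected_votes):
    """True if this partition holds a strict majority of votes."""
    return votes_present > expected_votes // 2

def survivor(partition_sizes, expected_votes):
    """Return the index of the quorate partition, or None if no side has quorum."""
    for i, size in enumerate(partition_sizes):
        if has_quorum(size, expected_votes):
            return i
    return None  # total quorum loss: all partitions should fence/stop services
```

This is why a 5-node cluster split 3/2 keeps the 3-node side alive, and why an even 1/1 split of a small cluster stops both sides unless a tie-breaking device is configured.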

Posted 2 days ago



130.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description - Manager, Quality Engineer

The Opportunity: Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be part of a team with a passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats.

Our Technology Centres focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, Tech Centres are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to our other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Centre helps ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. Together, we leverage the strength of our team to collaborate globally, optimize connections, and share best practices across the Tech Centres.

What Will You Do In This Role:
Develop and Implement Advanced Automated Testing Frameworks: Architect, design, and maintain sophisticated automated testing frameworks for data pipelines and ETL processes, ensuring robust data quality and reliability.
Conduct Comprehensive Quality Assurance Testing: Lead the execution of extensive testing strategies, including functional, regression, performance, and security testing, to validate data accuracy and integrity across the bronze layer.
Monitor and Enhance Data Reliability: Collaborate with the data engineering team to establish and refine monitoring and alerting systems that proactively identify data quality issues and system failures, implementing corrective actions as needed.
Leverage Generative AI: Innovate and apply generative AI techniques to enhance testing processes, automate complex data validation scenarios, and improve overall data quality assurance workflows.
Collaborate with Cross-Functional Teams: Serve as a key liaison between Data Engineers, Product Analysts, and other stakeholders to deeply understand data requirements and ensure that testing aligns with strategic business objectives.
Document and Standardize Testing Processes: Create and maintain comprehensive documentation of testing procedures, results, and best practices, facilitating knowledge sharing and continuous improvement across the organization.
Drive Continuous Improvement Initiatives: Lead efforts to develop and implement best practices for QA automation and reliability, including conducting code reviews, mentoring junior team members, and optimizing testing processes.

What You Should Have:
Educational Background: Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
Experience: 4+ years of experience in QA automation, with a strong focus on data quality and reliability testing in complex data engineering environments.
Technical Skills: Advanced proficiency in programming languages such as Python, Java, or similar for writing and optimizing automated tests. Extensive experience with testing frameworks and tools (e.g., Selenium, JUnit, pytest) and data validation tools, with a focus on scalability and performance. Deep familiarity with data processing frameworks (e.g., Apache Spark) and data storage solutions (e.g., SQL, NoSQL), including performance tuning and optimization. Strong understanding of generative AI concepts and tools, and their application in enhancing data quality and testing methodologies. Proficiency in using Jira Xray for advanced test management, including creating, executing, and tracking complex test cases and defects.
Analytical Skills: Exceptional analytical and problem-solving skills, with a proven ability to identify, troubleshoot, and resolve intricate data quality issues effectively.
Communication Skills: Outstanding verbal and written communication skills, with the ability to articulate complex technical concepts to both technical and non-technical stakeholders.

Preferred Qualifications:
Experience with Cloud Platforms: Extensive familiarity with cloud data services (e.g., AWS, Azure, Google Cloud) and their QA tools, including experience in cloud-based testing environments.
Knowledge of Data Governance: In-depth understanding of data governance principles and practices, including data lineage, metadata management, and compliance requirements.
Experience with CI/CD Pipelines: Strong knowledge of continuous integration and continuous deployment (CI/CD) practices and tools (e.g., Jenkins, GitLab CI), with experience automating testing within CI/CD workflows.
Certifications: Relevant certifications in QA automation or data engineering (e.g., ISTQB, AWS Certified Data Analytics) are highly regarded.
Agile Methodologies: Proven experience working in Agile/Scrum environments, with a strong understanding of Agile testing practices and principles.

Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.

Who We Are: We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What We Look For: Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us and start making your impact today.
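As a small illustration of the automated data-quality testing this role describes, here is a self-contained null-rate check of the kind a pytest suite might run against a bronze-layer batch. The field names and the 5% limit are hypothetical:

```python
# Sketch of a batch data-quality assertion: required fields must be present
# in at least (1 - max_null_rate) of the rows, else report an error.
def validate_batch(rows, required_fields, max_null_rate=0.05):
    """Return a list of human-readable errors; empty list means the batch passes."""
    errors = []
    for field in required_fields:
        missing = sum(1 for r in rows if r.get(field) is None)
        if rows and missing / len(rows) > max_null_rate:
            errors.append(f"{field}: null rate {missing}/{len(rows)} exceeds limit")
    return errors
```

In a real suite each check would be a pytest test asserting `validate_batch(...) == []`, run from the CI/CD pipeline after every load.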
#HYDIT2025

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives, Please Read Carefully: Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Flexible Work Arrangements: Hybrid
Required Skills: Business Intelligence (BI), Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Design Applications, Information Management, Software Development, Software Development Life Cycle (SDLC), System Designs
Job Posting End Date: 08/31/2025 (a posting is effective until 11:59:59 PM on the day before the listed end date; please apply no later than the day before the end date)
Requisition ID: R345312

Posted 2 days ago


0.0 - 1.0 years

0 Lacs

Mumbai

On-site


Job Information
Industry: IT Services
Date Opened: 06/16/2025
Job Type: Software Engineering
Work Experience: 0-1 years
City: Mumbai
State/Province: Maharashtra
Country: India
Zip/Postal Code: 400080

Job Description

What we want: We are looking for an Intern DevOps Engineer with good experience in Linux and exposure to DevOps tools.

Who we are: Vertoz (NSEI: VERTOZ), an AI-powered MadTech and CloudTech platform offering Digital Advertising, Marketing and Monetization (MadTech) & Digital Identity, and Cloud Infrastructure (CloudTech), caters to businesses, digital marketers, advertising agencies, digital publishers, cloud providers, and technology companies. For more details, please visit our website here.

What you will do:
Linux: be comfortable with the command line (preferably on Ubuntu; completion of a course will be an advantage).
Possess knowledge of AWS or an equivalent cloud services provider.
Virtualization (KVM, VMware, or VirtualBox).
Knowledge of networking (OSI, basic troubleshooting, Internet services).
Knowledge of web technologies like Redis, Apache Tomcat, or Apache Web Server.
Should know an SQL-based database (MySQL, MariaDB, or PostgreSQL).
Must be self-driven and able to follow and execute instructions specified in user guides.
Knowledge of Jenkins, Ansible/Chef/Puppet, Git, and Docker preferred.
Must be able to document activities, procedures, etc.

Requirements:
BE or BSc in CS/IT, ME in CS, or MSc in CS/IT.
Linux (RHCE/RHCSA) certification is a must.
Mumbai candidates only.
Willing to work in a 24x7 environment.

Benefits:
No dress codes
Flexible working hours
5-day work week
24 annual leaves
International presence
Celebrations
Team outings

Posted 2 days ago


5.0 years

0 Lacs

Chennai

On-site


QA Manager
Date: Jun 16, 2025
Location: Chennai, Tamil Nadu, IN
Company: Super Micro Computer
Job Req ID: 26743

Includes the following essential duties and responsibilities (other duties may also be assigned): Supermicro seeks a qualified QA manager with hands-on experience creating and enforcing quality standards for web-based products. As a QA manager, you will leverage your expert technical knowledge and past implementation experience in developing processes and standards to build our new cloud solution based on the latest industry cloud software development technologies, such as the LAMP stack (Linux, Apache, Python, MySQL, etc.). You will develop comprehensive test plans, strategies, and schedules for enterprise-scale requirements and lead their initial adoption across various test cases. You will be responsible for managing the lab hardware and quality processes. You will need an excellent understanding of infrastructure operations, tools, and patterns used in an agile, continuous-delivery development environment.

Job Summary:
Monitor and report using test tools for automated, manual, and regression testing
Knowledgeable of VDBench and Jenkins
Skilled in understanding and deploying cloud technologies

About Supermicro: Supermicro® is a top-tier provider of advanced server, storage, and networking solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, Hyperscale, HPC, and IoT/Embedded customers worldwide. We are the #5 fastest growing company among the Silicon Valley Top 50 technology firms. Our unprecedented global expansion has provided us with the opportunity to offer a large number of new positions to the technology community. We seek talented, passionate, and committed engineers, technologists, and business leaders to join us.

Qualifications (Education and/or Experience):
BS/MS in EE, CE, or ME
5+ years of quality assurance expertise
Experience with Agile development tools (Redmine, Git)
Confident presenter and strong influencer; able to adapt level and style to the audience

EEO Statement: Supermicro is an Equal Opportunity Employer and embraces diversity in our employee population. It is the policy of Supermicro to provide equal opportunity to all qualified applicants and employees without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, protected veteran status or special disabled veteran, marital status, pregnancy, genetic information, or any other legally protected status.

Posted 2 days ago


10.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Overview: The Technology Solution Delivery - Front Line Manager (M1) is responsible for providing leadership and day-to-day direction to a cross functional engineering team. This role involves establishing and executing operational plans, managing relationships with internal and external customers, and overseeing technical fulfillment projects. The manager also supports sales verticals in customer interactions and ensures the delivery of technology solutions aligns with business needs. What you will do: Build strong relationships with both internal and external stakeholders including product, business and sales partners. Demonstrate excellent communication skills with the ability to both simplify complex problems and also dive deeper if needed Manage teams with cross functional skills that include software, quality, reliability engineers, project managers and scrum masters. Mentor, coach and develop junior and senior software, quality and reliability engineers. Collaborate with the architects, SRE leads and other technical leadership on strategic technical direction, guidelines, and best practices Ensure compliance with EFX secure software development guidelines and best practices and responsible for meeting and maintaining QE, DevSec, and FinOps KPIs. Define, maintain and report SLA, SLO, SLIs meeting EFX engineering standards in partnership with the product, engineering and architecture teams Drive technical documentation including support, end user documentation and run books. 
Lead Sprint planning, Sprint Retrospectives, and other team activities. Implement architecture decision-making associated with product features/stories, refactoring work, and EOSL decisions. Create and deliver technical presentations to internal and external technical and non-technical stakeholders, communicating with clarity and precision, and present complex information in a concise format that is audience appropriate. Provide coaching, leadership, and talent development; ensure the team functions as a high-performing team; identify performance gaps and opportunities for upskilling and transition when necessary. Drive a culture of accountability through actions, stakeholder engagement, and expectation management. Develop the long-term technical vision and roadmap within, and often beyond, the scope of your teams. Oversee systems designs within the scope of the broader area, and review product or system development code to solve ambiguous problems. Identify and resolve problems affecting day-to-day operations. Set priorities for the engineering team and coordinate work activities with other supervisors. Cloud Certification Strongly Preferred. What experience you need: BS or MS degree in a STEM major or equivalent job experience required. 10+ years' experience in software development and delivery. You adore working in a fast-paced and agile development environment. You possess excellent communication, sharp analytical abilities, and proven design skills. You have detailed knowledge of modern software development lifecycles, including CI/CD. You have the ability to operate across a broad and complex business unit with multiple stakeholders. You have an understanding of the key aspects of finance, especially as related to Technology.
Specifically, this includes total cost of ownership and value. You are a self-starter, highly motivated, and have a real passion for actively learning and researching new methods of work and new technology. You possess excellent written and verbal communication skills, with the ability to communicate with team members at various levels, including business leaders. What Could Set You Apart: UI development (e.g., HTML, JavaScript, AngularJS, Angular 4/5, and Bootstrap). Source code control management systems (e.g., Git, Subversion) and build tools like Maven. Big Data, Postgres, Oracle, MySQL, NoSQL databases (e.g., Cassandra, Hadoop, MongoDB, Neo4j). Design patterns. Agile environments (e.g., Scrum, XP). Software development best practices such as TDD (e.g., JUnit), automated testing (e.g., Gauge, Cucumber, FitNesse), continuous integration (e.g., Jenkins, GoCD). Linux command line and shell scripting languages. Relational databases (e.g., SQL Server, MySQL). Cloud computing, SaaS (Software as a Service). Atlassian tooling (e.g., JIRA, Confluence, and Bitbucket). Experience working in financial services. Experience working with open-source frameworks, preferably Spring, though we would also consider Ruby, Apache Struts, Symfony, Django, etc. Automated testing: JUnit, Selenium, LoadRunner, SoapUI. Behaviors: Customer-focused with a drive to exceed expectations. Demonstrates integrity and accountability. Intellectually curious and driven to innovate. Values diversity and fosters collaboration. Results-oriented with a sense of urgency and agility.

Posted 2 days ago

Apply

5.0 - 7.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


You are passionate about quality and how customers experience the products you test. You have the ability to create, maintain, and execute test plans in order to verify requirements. As a Quality Engineer at Equifax, you will be a catalyst in both the development and the testing of high-priority initiatives. You will develop and test new products to support technology operations while maintaining exemplary standards. As a collaborative member of the team, you will deliver QA services (code quality, testing services, performance engineering, development collaboration, and continuous integration). You will conduct quality control tests to ensure full compliance with specified standards and end-user requirements. You will execute tests using established plans and scripts, document problems in an issues log, and retest to ensure problems are resolved. You will create test files to thoroughly test program logic and verify system flow. You will identify, recommend, and implement changes to enhance the effectiveness of QA strategies. What You Will Do: Independently develop scalable and reliable automated tests and frameworks for testing software solutions. Specify and automate test scenarios and test data for a highly complex business by analyzing integration points, data flows, personas, authorization schemes, and environments. Develop regression suites, develop automation scenarios, and move automation to an agile continuous testing model. Proactively and collaboratively take part in all testing-related activities while establishing partnerships with key stakeholders in Product, Development/Engineering, and Technology Operations.
What Experience You Need: Bachelor's degree in a STEM major or equivalent experience. 5-7 years of software testing experience. Able to create and review test automation according to specifications. Ability to write, debug, and troubleshoot code in Java, Spring Boot, TypeScript/JavaScript, HTML, CSS. Creation and use of big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others with respect to software validation. Created test strategies and plans. Led complex testing efforts or projects. Participated in Sprint Planning as the Test Lead. Collaborated with Product Owners, SREs, and Technical Architects to define testing strategies and plans. Design and development of microservices using Java, Spring Boot, GCP SDKs, GKE/Kubernetes. Deploy and release software using Jenkins CI/CD pipelines; understand infrastructure-as-code concepts, Helm Charts, and Terraform constructs. Cloud Certification Strongly Preferred. What Could Set You Apart: An ability to demonstrate successful performance of our Success Profile skills, including: Attention to Detail - Define test case candidates for automation that are outside of product specifications, i.e.
Negative Testing; Create thorough and accurate documentation of all work, including status updates to summarize project highlights; validate that processes operate properly and conform to standards. Automation - Automate defined test cases and test suites per project. Collaboration - Collaborate with Product Owners and the development team to plan and assist with user acceptance testing; Collaborate with product owners, development leads, and architects on functional and non-functional test strategies and plans. Execution - Develop scalable and reliable automated tests; Develop performance testing scripts to assure products are adhering to the documented SLO/SLI/SLAs; Specify the need for test data types for automated testing; Create automated tests and test data for projects; Develop automated regression suites; Integrate automated regression tests into the CI/CD pipeline; Work with teams on E2E testing strategies and plans against multiple product integration points. Quality Control - Perform defect analysis and in-depth technical root cause analysis, identifying trends and recommendations to resolve complex functional issues and process improvements; Analyze results of functional and non-functional tests and make recommendations for improvements. Performance / Resilience: Understand application and network architecture as inputs to create performance and resilience test strategies and plans for each product and platform.
Conducting the performance and resilience testing to ensure the products meet SLAs/SLOs. Quality Focus - Review test cases for complete functional coverage; Review the quality section of the Production Readiness Review for completeness; Recommend changes to existing testing methodologies for effectiveness and efficiency of product validation; Ensure communications are thorough and accurate for all work documentation, including status and project updates. Risk Mitigation - Work with Product Owners, QE, and development team leads to track and determine prioritization of defect fixes.
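The regression-suite and CI/CD responsibilities above can be made concrete with a small sketch: a pure-stdlib Python unittest suite run programmatically. The `normalize_score` function and its 0-850 range are hypothetical, invented only to give the suite something to pin down; in a real pipeline the same suite would run on every commit.

```python
import unittest

def normalize_score(raw, max_raw=850):
    """Hypothetical helper under test: scale a raw score into [0, 1]."""
    if not 0 <= raw <= max_raw:
        raise ValueError("raw score out of range")
    return raw / max_raw

class NormalizeScoreRegression(unittest.TestCase):
    """Regression suite: pins down current behavior so a future change
    that breaks it fails the build instead of reaching production."""

    def test_bounds(self):
        self.assertEqual(normalize_score(0), 0.0)
        self.assertEqual(normalize_score(850), 1.0)

    def test_rejects_out_of_range(self):
        with self.assertRaises(ValueError):
            normalize_score(900)

# Run the suite programmatically; in CI this would be a pipeline step
# such as `python -m unittest` integrated into the Jenkins/CD flow.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NormalizeScoreRegression)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

`result.wasSuccessful()` is the boolean a CI gate would check before allowing a deploy.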

Posted 2 days ago

Apply

0 years

0 - 0 Lacs

Tiruchchirāppalli

On-site


A data scientist collects and analyzes large datasets to uncover insights and create solutions that support organizational goals. They combine technical, analytical, and communication skills to interpret data and influence decision-making. Key Responsibilities: Gather data from multiple sources and prepare it for analysis. Analyze large volumes of structured and unstructured data to identify trends and patterns. Develop machine learning models and predictive algorithms to solve business problems. Use statistical techniques to validate findings and ensure accuracy. Automate processes using AI tools and programming. Create clear, engaging visualizations and reports to communicate results. Work closely with different teams to apply data-driven insights. Stay updated with the latest tools, technologies, and methods in data science. Tools and Technologies: Programming languages: Python, R, SQL. Data visualization: Tableau, Power BI, matplotlib. Machine learning frameworks: TensorFlow, Scikit-learn, PyTorch. Big data platforms: Apache Hadoop, Spark. Cloud platforms: AWS, Azure, Google Cloud. Statistical tools: SAS, SPSS. Job Type: Full-time Pay: ₹9,938.89 - ₹30,790.14 per month Schedule: Day shift, Monday to Friday, Morning shift, Weekend availability Supplemental Pay: Performance bonus Application Question(s): Are you an immediate joiner? Location: Tiruchirappalli, Tamil Nadu (Preferred) Work Location: In person Application Deadline: 19/06/2025 Expected Start Date: 19/06/2025

Posted 2 days ago

Apply

1.0 years

11 - 13 Lacs

Chennai

Remote


Experience: 1+ years Work location: Bangalore, Chennai, Hyderabad, Pune (Hybrid) Job Description: GCP Cloud Engineer Shift Time: 2 to 11 PM IST Budget: Max 13 LPA Primary Skills & Weightage: GCP - 50%, Kubernetes - 25%, NodeJS - 25% Technical Skills: Cloud: Experience working with Google Cloud Platform (GCP) services. Containers & Orchestration: Practical experience deploying and managing applications on Kubernetes. Programming: Proficiency in Node.js development, including building and maintaining RESTful APIs or backend services. Messaging: Familiarity with Apache Kafka for producing and consuming messages. Databases: Experience with PostgreSQL or similar relational databases (writing queries, basic schema design). Version Control: Proficient with Git and GitHub workflows (branching, pull requests, code reviews). Development Tools: Comfortable using Visual Studio Code (VSCode) or similar IDEs. Additional Requirements: Communication: Ability to communicate clearly in English (written and verbal). Collaboration: Experience working in distributed or remote teams. Problem Solving: Demonstrated ability to troubleshoot and debug issues independently. Learning: Willingness to learn new technologies and adapt to changing requirements. Preferred but not required: Experience with CI/CD pipelines. Familiarity with Agile methodologies. Exposure to monitoring/logging tools (e.g., Prometheus, Grafana, ELK stack). Job Type: Full-time Pay: ₹1,100,000.00 - ₹1,300,000.00 per year Schedule: UK shift Work Location: In person
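A minimal sketch of the relational-database skills this listing asks for (writing queries, basic schema design), using Python's stdlib `sqlite3` as a stand-in for PostgreSQL. The table and column names are hypothetical, chosen only to show a typical join-plus-aggregate query.

```python
import sqlite3

# In-memory database; in production this would be a PostgreSQL connection.
conn = sqlite3.connect(":memory:")

# Basic schema design: two tables with a foreign-key relationship.
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         user_id INTEGER NOT NULL REFERENCES users(id),
                         total   REAL    NOT NULL);
""")
conn.execute("INSERT INTO users VALUES (1, 'Asha')")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(10, 1, 250.0), (11, 1, 99.5)])

# A typical query: join the tables and aggregate per user.
row = conn.execute("""
    SELECT u.name, COUNT(*) AS n, SUM(o.total) AS spent
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.id
""").fetchone()
print(row)  # ('Asha', 2, 349.5)
```

The same SQL (modulo dialect differences such as `SERIAL` primary keys) runs against PostgreSQL via a driver like psycopg.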

Posted 2 days ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Req ID: 321843 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Sr Java Full Stack Developer to join our team in Chennai, Tamil Nādu (IN-TN), India (IN). Lead Java Developer How You’ll Help Us: Our clients need digital solutions that will transform their business so they can succeed in today’s hypercompetitive marketplace. As a team member you will routinely deliver elite solutions to clients that will impact their products, customers, and services. Using your development, design and leadership skills and experience, you will design and implement solutions based on client needs. You will collaborate with customers on future system enhancements, resulting in continued engagements. How We Will Help You: Joining our Java practice is not only a job, but a chance to grow your career. We will make sure to equip you with the skills you need to produce robust applications that you can be proud of. Whether it is providing you with training on a new programming language or helping you get certified in a new technology, we will help you grow your skills so you can continue to deliver increasingly valuable work. Once You Are Here, You Will: The Lead Applications Developer provides leadership in full systems life cycle management (e.g., analyses, technical requirements, design, coding, testing, implementation of systems and applications software, etc.) to ensure delivery is on time and within budget. You will direct component and data architecture design, technology planning, and testing for Applications Development (AD) initiatives to meet business requirements and ensure compliance. This position develops and leads AD project activities and integrations. The Lead Applications Developer guides teams to ensure effective communication and achievement of objectives.
This position provides knowledge and support for applications’ development, integration, and maintenance. The Lead Applications Developer will lead junior team members with project related activities and tasks. You will guide and influence department and project teams. This position facilitates collaboration with stakeholders. Apply Disaster Recovery Knowledge Apply Foundation Architecture Knowledge Apply Information Analysis and Solution Generation Knowledge Apply Information Systems Knowledge Apply Internal Systems Knowledge Assess Business Needs IT – Design/Develop Application Solutions IT – Knowledge of Emerging Technology IT – Process, Methods, and Tools IT – Stakeholder Relationship Management Project Risk Management Problem Management and Project Planning Technical Problem Solving and Analytical Processes Technical Writing Job Requirements: Lead IS Projects; delegate work assignments to complete the deliverables for small projects or components of larger projects to meet project plan requirements Lead System Analysis and Design; Translates business and functional requirements into technical design to meet stated business needs. Leads Design and Development of Applications; Identify new areas for process improvements to enhance performance results. Deliver application solutions to meet business and non-functional requirements. Develop and Ensure Creation of Application Documentation; determines documentation needs to deliver applications Define and Produce Integration Builds; lead build processes for target environments to create software. Verifies integration test specifications to ensure proper testing. Monitor Emerging Technology Trends; monitor the industry to gain knowledge and understanding of emerging technologies. Lead Maintenance and Support; drives problem resolution to identify, recommend, and implement process improvements. 
Lead other Team Members; provide input to people processes (e.g., Quality Performance Review, Career Development, Training, Staffing) to provide detailed performance-level information to managers. Basic qualifications: 6+ years of experience with Java, leading the development of highly scalable and resilient applications. 6+ years of deep architectural experience with Spring Boot, including experience mentoring others in its best practices and advanced features. 4+ years of Angular. 4+ years of GCP or a similar platform such as Azure or AWS. 4+ years of experience with Couchbase, including leading performance tuning, data modeling, and scalability efforts. 4+ years of experience with Kafka, AMQ, and WMQ, and the strategic implementation of messaging and event-driven architectures. 4+ years of experience in Apache Camel, including designing and implementing complex integration solutions. 4+ years of leadership experience in adopting new technologies and frameworks, guiding best practices in development methodologies, and overseeing technical project management. Ideal Mindset: Lifelong Learner. You are always seeking to improve your technical and nontechnical skills. Team Player. You are someone who wants to see everyone on the team succeed and is willing to go the extra mile to help a teammate in need. Communicator. You know how to communicate your design ideas to both technical and nontechnical stakeholders, prioritizing critical information and leaving out extraneous details. Please note Shift Timing Requirement: 1:30pm IST - 10:30pm IST #Launchjobs #LaunchEngineering About NTT DATA NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies.
Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com. NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here . If you'd like more information on your EEO rights under the law, please click here . For Pay Transparency information, please click here .

Posted 2 days ago

Apply

5.0 years

15 Lacs

India

On-site


Key Responsibilities: Architect, design, and optimize enterprise-grade NiFi data flows for large-scale ingestion, transformation, and routing. Manage Kafka clusters at scale (multi-node, multi-datacenter setups), ensuring high availability, fault tolerance, and maximum throughput. Create custom NiFi processors and develop advanced flow templates and best practices. Handle advanced Kafka configurations: partitioning, replication, producer tuning, consumer optimization, rebalancing, etc. Implement stream processing using Kafka Streams and manage Kafka Connect integrations with external systems (databases, APIs, cloud storage). Design secure pipelines with end-to-end encryption, authentication (SSL/SASL), and RBAC for both NiFi and Kafka. Proactively monitor and troubleshoot performance bottlenecks in real-time streaming environments. Collaborate with infrastructure teams for scaling, backup, and disaster recovery planning for NiFi/Kafka. Mentor junior engineers and enforce best practices for data flow and streaming architectures. Required Skills and Qualifications: 5+ years of hands-on production experience with Apache NiFi and Apache Kafka. Deep understanding of NiFi architecture (flow file repository, provenance, state management, backpressure handling). Mastery over Kafka internals: brokers, producers/consumers, ZooKeeper (or KRaft mode), offsets, ISR, topic configurations. Strong experience with Kafka Connect, Kafka Streams, Schema Registry, and data serialization formats (Avro, Protobuf, JSON). Expertise in tuning NiFi and Kafka for ultra-low latency and high throughput. Strong scripting and automation skills (Shell, Python, Groovy, etc.). Experience with monitoring tools: Prometheus, Grafana, Confluent Control Center, NiFi Registry, NiFi monitoring dashboards. Solid knowledge of security best practices in data streaming (encryption, access control, secret management).
Hands-on experience deploying on cloud platforms (AWS MSK, Azure Event Hubs, GCP Pub/Sub with Kafka connectors). Bachelor's or Master's degree in Computer Science, Data Engineering, or an equivalent field. Preferred (Bonus) Skills: Experience with containerization and orchestration (Docker, Kubernetes, Helm). Knowledge of stream processing frameworks like Apache Flink or Spark Streaming. Contributions to open-source NiFi/Kafka projects (a huge plus!). Soft Skills: Analytical thinker with exceptional troubleshooting skills. Ability to architect solutions under tight deadlines. Leadership qualities for guiding and mentoring engineering teams. Excellent communication and documentation skills. Please send your resume to hr@rrmgt.in or call 9081819473. Job Type: Full-time Pay: From ₹1,500,000.00 per year Work Location: In person
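The partitioning and producer-tuning topics in this listing hinge on how a keyed record maps to a partition. Here is a simplified Python sketch of that property; note that real Kafka clients hash keys with murmur2 (Java) or CRC32C/murmur2 variants, and MD5 here is only a stand-in used to demonstrate the same-key-same-partition guarantee, not Kafka's actual algorithm.

```python
import hashlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Simplified sketch of keyed partitioning: hash the key bytes and
    take the result modulo the partition count. Illustrative only --
    Kafka's default partitioner uses murmur2, not MD5."""
    digest = int.from_bytes(hashlib.md5(key).digest()[:4], "big")
    return digest % num_partitions

# Per-key ordering guarantee: every record keyed "user-42" lands on
# the same partition, so one consumer sees that user's events in order.
p1 = partition_for(b"user-42", 6)
p2 = partition_for(b"user-42", 6)
assert p1 == p2

# Repartitioning hazard: changing the partition count can remap existing
# keys, which is why partition counts are grown carefully and never shrunk.
print(partition_for(b"user-42", 6), partition_for(b"user-42", 12))
```

This is also why "rebalancing" and "topic configurations" appear together in the requirements: partition count is a near-permanent design decision for keyed topics.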

Posted 2 days ago

Apply

Exploring Apache Jobs in India

Apache is shorthand for the Apache Software Foundation, which maintains a wide range of open-source projects, from the Apache HTTP Server to big-data tools such as Kafka, Spark, and Hadoop. In India, the demand for professionals with expertise in Apache tools and technologies is on the rise. Job seekers looking to pursue a career in Apache-related roles have a plethora of opportunities across industries. Let's delve into the Apache job market in India to gain a better understanding of the landscape.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their thriving IT sectors and see a high demand for Apache professionals across different organizations.

Average Salary Range

The salary range for Apache professionals in India varies based on experience and skill level:

  • Entry-level: INR 3-5 lakhs per annum
  • Mid-level: INR 6-10 lakhs per annum
  • Experienced: INR 12-20 lakhs per annum

Career Path

In the Apache job market in India, a typical career path may progress as follows:

  1. Junior Developer
  2. Developer
  3. Senior Developer
  4. Tech Lead
  5. Architect

Related Skills

Besides expertise in Apache tools and technologies, professionals in this field are often expected to have skills in:

  • Linux
  • Networking
  • Database Management
  • Cloud Computing

Interview Questions

  • What is Apache HTTP Server and how does it differ from Apache Tomcat? (medium)
  • Explain the difference between Apache Hadoop and Apache Spark. (medium)
  • What is mod_rewrite in Apache and how is it used? (medium)
  • How do you troubleshoot common Apache server errors? (medium)
  • What is the purpose of .htaccess file in Apache? (basic)
  • Explain the role of Apache Kafka in real-time data processing. (medium)
  • How do you secure an Apache web server? (medium)
  • What is the significance of Apache Maven in software development? (basic)
  • Explain the concept of virtual hosts in Apache. (basic)
  • How do you optimize Apache web server performance? (medium)
  • Describe the functionality of Apache Solr. (medium)
  • What is the purpose of Apache Camel? (medium)
  • How do you monitor Apache server logs? (medium)
  • Explain the role of Apache ZooKeeper in distributed applications. (advanced)
  • How do you configure SSL/TLS on an Apache web server? (medium)
  • Discuss the advantages of using Apache Cassandra for data management. (medium)
  • What is the Apache Lucene library used for? (basic)
  • How do you handle high traffic on an Apache server? (medium)
  • Explain the concept of .htpasswd in Apache. (basic)
  • What is the role of Apache Thrift in software development? (advanced)
  • How do you troubleshoot Apache server performance issues? (medium)
  • Discuss the importance of Apache Flume in data ingestion. (medium)
  • What is the significance of Apache Storm in real-time data processing? (medium)
  • How do you deploy applications on Apache Tomcat? (medium)
  • Explain the concept of .htaccess directives in Apache. (basic)
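Several of the questions above touch on monitoring Apache server logs and troubleshooting errors. A minimal Python sketch of parsing access-log lines in the Combined Log Format and tallying status codes; the sample log lines below are fabricated for illustration.

```python
import re
from collections import Counter

# Field layout of Apache's "combined" LogFormat:
# host ident authuser [time] "request" status bytes "referer" "user-agent"
LOG_RE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]+" (?P<status>\d{3}) \S+'
)

sample = [
    '203.0.113.7 - - [19/Jun/2025:10:00:01 +0530] "GET /index.html HTTP/1.1" 200 5316',
    '203.0.113.7 - - [19/Jun/2025:10:00:02 +0530] "GET /missing HTTP/1.1" 404 196',
    '198.51.100.2 - - [19/Jun/2025:10:00:03 +0530] "POST /login HTTP/1.1" 500 310',
]

status_counts = Counter()
for line in sample:
    m = LOG_RE.match(line)
    if m:
        status_counts[m.group("status")] += 1

# A spike in 5xx counts is usually the first signal when
# troubleshooting server errors (one of the questions above).
print(status_counts)
```

In production this kind of tally is typically done by tools like GoAccess or an ELK pipeline, but the underlying parsing is the same.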

Conclusion

As you embark on your journey to explore Apache jobs in India, it is essential to stay updated on the latest trends and technologies in the field. By honing your skills and preparing thoroughly for interviews, you can position yourself as a competitive candidate in the Apache job market. Stay motivated, keep learning, and pursue your dream career with confidence!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies