5940 Apache Jobs - Page 11

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Senior Automation Engineer
Job Type: Full-time, Contractor

About Us: Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary: We are seeking a detail-oriented and innovative Senior Automation Engineer to join our customer's team. In this critical role, you will design, develop, and execute automated tests to ensure the quality, reliability, and integrity of data within Databricks environments. If you are passionate about data quality, thrive in collaborative environments, and excel at both written and verbal communication, we'd love to meet you.

Key Responsibilities:
- Design, develop, and maintain robust automated test scripts using Python, Selenium, and SQL to validate data integrity within Databricks environments.
- Execute comprehensive data validation and verification activities to ensure accuracy and consistency across multiple systems, data warehouses, and data lakes.
- Create detailed and effective test plans and test cases based on technical requirements and business specifications.
- Integrate automated tests with CI/CD pipelines to facilitate seamless and efficient testing and deployment processes.
- Work collaboratively with data engineers, developers, and other stakeholders to gather data requirements and achieve comprehensive test coverage.
- Document test cases, results, and identified defects; communicate findings clearly to the team.
- Conduct performance testing to ensure data processing and retrieval meet established benchmarks.
- Provide mentorship and guidance to junior team members, promoting best practices in test automation and data validation.

Required Skills and Qualifications:
- Strong proficiency in Python, Selenium, and SQL for developing test automation solutions.
- Hands-on experience with Databricks, data warehouse, and data lake architectures.
- Proven expertise in automated testing of data pipelines, preferably with tools such as Apache Airflow, dbt test, or similar.
- Proficiency in integrating automated tests within CI/CD pipelines on cloud platforms (AWS, Azure preferred).
- Excellent written and verbal communication skills with the ability to translate technical concepts to diverse audiences.
- Bachelor's degree in Computer Science, Information Technology, or a related discipline.
- Demonstrated problem-solving skills and a collaborative approach to teamwork.

Preferred Qualifications:
- Experience implementing security and data protection measures in data-driven applications.
- Ability to integrate user-facing elements with server-side logic for seamless data experiences.
- Demonstrated passion for continuous improvement in test automation processes, tools, and methodologies.
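For orientation, here is a minimal, hedged sketch of the kind of Python/SQL data-validation test the role describes, using pytest and the databricks-sql-connector package. Hostname, HTTP path, token, and table names are placeholders, not anything specified by the posting.

```python
# Illustrative sketch only: a pytest-style row-count reconciliation check
# between a source table and a curated target table in Databricks.
import os
import pytest
from databricks import sql  # pip install databricks-sql-connector

SOURCE_TABLE = "raw.orders"      # hypothetical source table
TARGET_TABLE = "curated.orders"  # hypothetical target table

@pytest.fixture(scope="module")
def connection():
    conn = sql.connect(
        server_hostname=os.environ["DATABRICKS_HOST"],
        http_path=os.environ["DATABRICKS_HTTP_PATH"],
        access_token=os.environ["DATABRICKS_TOKEN"],
    )
    yield conn
    conn.close()

def fetch_count(conn, table):
    with conn.cursor() as cur:
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]

def test_row_counts_match(connection):
    # A basic integrity check: the curated table should not lose rows.
    assert fetch_count(connection, SOURCE_TABLE) == fetch_count(connection, TARGET_TABLE)
```

Real suites would add column-level checks (nulls, ranges, referential integrity) on the same pattern.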

Posted 2 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


Job Summary: We are seeking a highly skilled Lead Data Engineer / Associate Architect to lead the design, implementation, and optimization of scalable data architectures. The ideal candidate will have a deep understanding of data modeling, ETL processes, cloud data solutions, and big data technologies. You will work closely with cross-functional teams to build robust, high-performance data pipelines and infrastructure to enable data-driven decision-making.

Experience: 7-12 years
Work Location: Hyderabad (Hybrid) / Remote
Mandatory skills: AWS, Python, SQL, Airflow, dbt
Must have delivered one or two projects in the clinical domain/industry.

Responsibilities:
- Architecture: Design and develop scalable, resilient data architectures that support business needs, analytics, and AI/ML workloads.
- Data Pipeline Development: Design and implement robust ETL/ELT processes to ensure efficient data ingestion, transformation, and storage.
- Big Data & Cloud Solutions: Architect data solutions using cloud platforms like AWS, Azure, or GCP, leveraging services such as Snowflake, Redshift, BigQuery, and Databricks.
- Database Optimization: Ensure performance tuning, indexing strategies, and query optimization for relational and NoSQL databases.
- Data Governance & Security: Implement best practices for data quality, metadata management, compliance (GDPR, CCPA), and security.
- Collaboration & Leadership: Work closely with data engineers, analysts, and business stakeholders to translate business requirements into scalable solutions.
- Technology Evaluation: Stay updated with emerging trends, assess new tools and frameworks, and drive innovation in data engineering.

Required Skills:
- Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Experience: 7-12+ years of experience in data engineering.
- Cloud Platforms: Strong expertise in AWS data services.
- Databases: Hands-on experience with SQL, NoSQL, and columnar databases such as PostgreSQL, MongoDB, Cassandra, and Snowflake.
- Programming: Proficiency in Python, Scala, or Java for data processing and automation.
- ETL Tools: Experience with tools like Apache Airflow, Talend, dbt, or Informatica.
- Machine Learning & AI Integration (Preferred): Understanding of how to architect data solutions for AI/ML applications.
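Since Airflow and dbt are both mandatory skills here, a minimal sketch of what such an orchestrated pipeline can look like follows. This assumes Apache Airflow 2.4+ and a dbt project checked out on the worker; the DAG id, schedule, and paths are illustrative only.

```python
# Hedged sketch: a daily extract -> dbt run -> dbt test pipeline in Airflow.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="clinical_elt_daily",   # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_to_s3",
        bash_command="python /opt/pipelines/extract.py",   # placeholder script
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/clinical",
    )
    test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/clinical",
    )
    # Run extraction, then transformations, then dbt's data tests.
    extract >> transform >> test
```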

Posted 2 days ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


We are looking for a highly skilled and motivated Software Engineer with a strong background in implementing PriceFX solutions and deep expertise in Groovy scripting, Java, JavaScript, and Apache Camel. In this role, you will be responsible for delivering scalable pricing and integration solutions, contributing to digital transformation initiatives for global enterprises.

Job Location: Hyderabad, Ahmedabad, and Indore, India.

Key Responsibilities:
- Support the technical implementation of PriceFX modules including QuoteConfigurator, PriceBuilder, RebateManager, and others.
- Collaborate with senior engineers and business analysts to gather requirements and implement solutions.
- Write and maintain Groovy scripts to implement custom business logic and calculations within PriceFX.
- Develop and maintain integrations using Apache Camel, REST APIs, and other middleware tools.
- Develop backend components and services using Java, and frontend elements using JavaScript when required.
- Create and maintain technical documentation, best practices, and reusable assets.

What You'll Bring:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.

Mandatory Skills:
- 2+ years of experience in PriceFX implementation.
- Proficiency in Groovy scripting and PriceFX calculation logic setup.
- Hands-on experience in Java (Spring Boot preferred).
- Experience with Apache Camel for integration flows and routing.
- Solid understanding of JavaScript for light UI customization or scripting needs.
- Familiarity with RESTful APIs, JSON, XML, and third-party system integrations.
- Good understanding of pricing processes and enterprise software implementation.
- Strong problem-solving skills and attention to detail.
- Excellent communication and documentation skills.

Preferred Skills (Nice to Have):
- Experience working with cloud platforms (AWS, Azure, or GCP).
- Exposure to CI/CD, Git, and containerization (Docker/Kubernetes).
- Background in enterprise pricing, CPQ, or revenue management platforms.
- Experience in Agile/Scrum development environments.

Posted 2 days ago

Apply

4.0 - 5.11 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Position Title: Sr. Java Developer (Web)

Openings:
- 4 to 5.11 years of experience - 1 developer
- 6 to 7.11 years of experience - 1 developer

The position is with a BFSI-domain client based out of Kanjurmarg (Mumbai). The client is a market leader in its domain. Selected candidates will be working on cutting-edge technologies, as the client is looking for dynamic, hardworking, and committed candidates.

Qualification: B.E / B.Tech / M.Tech / MCA.

Key Responsibilities:
- Develop, release, and support robust, multi-tier, Java-based web applications and standalone systems.
- Deliver across the entire application life cycle: design, build, deploy, test, release, and support.
- Work directly with developers and product managers to conceptualize, build, test, and realise products.
- Fix bugs and improve application performance in coordination with the QA team.
- Continuously discover, evaluate, and implement new technologies to maximize development efficiency.
- Optimize application performance and stay up to date with industry trends in emerging technologies.

Required Skills:
- Experience developing applications using Java/J2EE with a sound understanding of Java 8-17.
- Strong proficiency in the back-end language (Java), Java frameworks (Spring Boot, Spring MVC), JavaScript frameworks (Angular, AngularJS), and Kafka.
- Strong JS skills in jQuery, HTML, and CSS; strong understanding of and experience with microservices.
- Experience with RDBMS concepts, SQL syntax, and complex query processing and optimization (e.g., PostgreSQL, Oracle).
- Exposure to handling and configuring web servers (e.g., Apache) and UI/UX design.
- Strong understanding of object-oriented programming (OOP) concepts and design patterns.
- Experience with web services and a clear understanding of RESTful APIs to connect to back-end services.
- Excellent problem-solving skills, with the ability to debug and troubleshoot code issues.
- Strong communication and teamwork skills, with the ability to work collaboratively with cross-functional teams.

Selection Procedure:
1. Face-to-face round of interview at the Greysoft office.
2. Virtual round of interview by the client.
3. Machine test (client location).

Joining Period: Immediate to 15 days.

Interested candidates can email their updated resume to recruiter@greysoft.in

This job is provided by Shine.com

Posted 2 days ago

Apply

4.0 - 5.11 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Position Title: Sr. Java Developer (Core)

Openings:
- 4 to 5.11 years of experience - 1 developer
- 6 to 7.11 years of experience - 1 developer

The position is with a BFSI-domain client based out of Kanjurmarg (Mumbai). The client is a market leader in its domain. Selected candidates will be working on cutting-edge technologies, as the client is looking for dynamic, hardworking, and committed candidates.

Qualification: B.E / B.Tech / M.Tech / MCA.

Key Responsibilities:
- Conceptualize, develop, release, and support robust, multi-tier, Java-based web applications and standalone systems.
- Deliver across the entire application life cycle: design, build, deploy, test, release, and support.
- Optimize system performance and continuously discover, evaluate, and implement emerging technologies to maximize development efficiency.
- Work directly with developers and product managers to conceptualize, build, test, and realise products.
- Fix bugs and improve application performance in coordination with the QA team.

Required Skills:
- Strong knowledge of Java 8-17, including the Collections framework and data structures, multithreading and concurrency management, memory management, Kafka, request queuing, NIO, IO, TCP/IP, and file systems.
- Experience developing applications using Java/J2EE, preferably with real-time response systems.
- Strong proficiency in the back-end language (Java) and Java frameworks (Spring Boot, Spring MVC).
- Strong understanding of and experience with microservices.
- Experience with RDBMS concepts, SQL syntax, and complex query processing and optimization (e.g., PostgreSQL, Oracle), as well as in-memory databases such as Redis and Memcached.
- Exposure to handling and configuring web servers (e.g., Apache) and UI/UX design.
- Strong understanding of object-oriented programming (OOP) concepts and design patterns.
- Excellent problem-solving skills, with the ability to debug and troubleshoot code issues.
- Strong communication and teamwork skills, with the ability to work collaboratively with cross-functional teams.

Selection Procedure:
1. Face-to-face round of interview at the Greysoft office.
2. Virtual round of interview by the client.
3. Machine test (client location).

Joining Period: Immediate to 15 days.

Interested candidates can email their updated resume to recruiter@greysoft.in

This job is provided by Shine.com

Posted 2 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

Remote


Experience Range: 10 years
Location: Pune / Remote
Qualification: BE/BTech/MCA/MTech (preferably CS/IT)

Technical Skills Required

Mandatory:
- Good knowledge of networking and troubleshooting tools: DNS, DHCP, TLS/SSL and other security protocols, routing, and packet-data analysis; prior experience with Wireshark, Nmap, HTTP analysers, DebugView, etc.
- Knowledge of VAPT analysis and of security software such as DLP and firewalls (endpoint security is an add-on).

Product and Application Support:
- Good experience in product and application support with sound knowledge of networking and IT infrastructure.
- Must have worked on supporting enterprise security applications such as zero trust, identity management, or multi-factor authentication solutions.
- Support experience with virtualization products from Citrix, Microsoft, Dell, etc.
- Should have worked with reverse proxy solutions.
- Should understand how to troubleshoot key web servers such as Apache, NGINX, Tomcat, and IIS.
- OWASP application security guidelines.
- How large enterprises typically manage product installations, upgrades, and patch management.
- Knowledge of PowerShell scripting, Linux shell scripting, and Python.

Infra Support:
- Excellent knowledge of Windows Server operating systems and roles: Active Directory, Group Policies, Remote Desktop Services, IIS, FSMO roles.
- Process data analysis and Windows Sysinternals tools knowledge are an add-on; batch and PowerShell scripting are desirable.
- Work experience with client-side operating systems (Windows 7, 8, 10) is a must.
- Very good working knowledge of Linux and macOS.

Support Management and Tools:
- Good knowledge of L1 and L2 ticket-tracking tools and service-level management tools.
- Should be able to manage escalations and the agreed SLAs for various clients.
- Should be able to provide reports for escalations, root cause analyses (RCAs), and productivity.
- Must ensure escalations are resolved at the root level, with zero repeat escalations.
- Excellent knowledge of server operating systems (Windows Server 2016/2019/2022, Linux flavours).
- Proficient in networking: DNS, DHCP, basic routing concepts, and network monitoring commands and tools.
- Good knowledge of IT infrastructure and security concepts: storage, file servers, SSL certificates, VPN gateways, VAPT analysis, UTMs, etc.
- Good knowledge of Azure cloud, a conceptual understanding of Desktop-as-a-Service, and working experience with Azure Virtual Desktop or equivalent products.

Role and Responsibilities:
- Provide solutions, not workarounds.
- Listen closely to customers and deliver on time; involve the appropriate authorities when escalations are required.
- Ensure support deliveries stay within SLAs.
- Provide solution documents, KB articles, and RCAs, and make sure team members follow the process.
- Proactively get involved in escalations and ensure customer commitments are met.
- Coordinate with the Product Management team on bug fixes, new-feature escalations, and development items, and ensure on-time resolution.
- Be good with statistical data, analyse priorities, and take part in product improvement discussions; work as a leader on special or ongoing requirements.
- Use appropriate judgement in critical environments.
- Reproduce customer issues and, if required, analyse the root cause; check for viable solutions outside of development, such as scripts and simple workarounds.

Good to have:
- Knowledge of Windows kernel drivers.
- Kubernetes and container technologies.
- Prior experience with support ticketing tools and processes.
- Experience with documentation.
- Certifications: ITIL v3 or ITIL v4.

Soft Skills Required:
- Strong communication skills (written and verbal)
- Clarity of thought
- User-centric approach
- Sincere, proactive, and self-motivated
- Analytical, logical bent of mind
- Team management
- Flexible/adaptable

About Accops: Accops empowers modern enterprises with agility, flexibility, and affordability by providing secure and instant remote access to business applications from any device and network. Founded in October 2012, Accops is headquartered in Pune, India, and is known for its nimble and customizable approach, offering faster response times in dynamic environments. We are a rapidly growing IT product company with a flat organizational structure and a flexible work environment. We enable enterprises to adopt 'work from anywhere', and by joining us, you get to work on hypergrowth technologies like virtualization, cloud computing, and network security.

Accops is an equal opportunity employer committed to building a culture where all employees are valued and respected and opinions count. We encourage applications from all suitably qualified persons irrespective of, but not limited to, their gender or genetic information, sexual orientation, ethnicity, religion, social status, medical care leave requirements, political affiliation, disability, color, national origin, veteran status, etc. We consider all applications based on merit and suitability for the role.
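Returning to the mandatory scripting skills listed above (Linux shell and Python), here is a small diagnostic sketch of the kind this role mentions: checking a server's TLS certificate expiry using only the Python standard library. The host list is illustrative.

```python
# Hedged sketch: report days until TLS certificate expiry for a host.
import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # validated cert metadata as a dict
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (expires - datetime.now(timezone.utc)).days

for host in ("example.com",):  # replace with the hosts being monitored
    print(host, cert_days_remaining(host), "days until certificate expiry")
```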

Posted 2 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

Remote


Job Title: Cloud Platform Engineer / Senior Engineer / Architect (Full-Time or Contract-Based, Flexible Engagement)
Location: Gurugram / Remote (hybrid models considered)

About Fluidech: Fluidech, an Esconet group company and a deemed public company, is a technology consulting and managed services firm specialising in cybersecurity. Founded in 2014 and headquartered in Gurugram, with a client base today spanning over 100 organizations worldwide, Fluidech designs IT solutions aligned with business objectives, fostering trusted relationships and delivering measurable performance improvements. Established as a born-in-the-cloud company, Fluidech has evolved into a trusted technology partner that helps businesses build (cloud and infrastructure), automate (DevOps), and secure (cybersecurity services). Our solutions span diverse industry verticals, aligned with each client's business goals. In addition to holding ISO 9001 and ISO 27001 certifications and having an award-winning cybersecurity team, the company has a strong value proposition in its GRC services across frameworks including, but not limited to, NCIIPC's CAF and SEBI's CSCRF.

Role Overview: We are looking for a highly skilled Cloud Platform Engineer / Architect to help Fluidech design and build a secure, scalable, and efficient private cloud platform using open-source cloud infrastructure platforms such as OpenNebula, Apache CloudStack, or similar. This individual will play a hands-on technical leadership role in architecting and deploying our internal cloud ecosystem, enabling compute, storage, and network virtualization, automation, self-service, and orchestration features.

Key Responsibilities:
- Design, architect, and implement a private cloud platform from the ground up.
- Evaluate and choose appropriate cloud stack technologies (OpenNebula, Apache CloudStack, Proxmox, etc.).
- Build and maintain cloud orchestration, provisioning, and resource management systems.
- Integrate storage, networking, and compute resources across virtualized infrastructure.
- Define and implement cloud security standards and access controls.
- Collaborate with DevOps, Infrastructure, and Security teams to align the platform with operational needs.
- Develop self-service portals and automation pipelines for provisioning workloads.
- Monitor system performance, reliability, and scalability; proactively identify and address issues.
- Document the entire architecture and design, including handover and operationalization steps.

Required Skills & Experience:
- Proven experience in building and managing private cloud platforms using OpenNebula, Apache CloudStack, or equivalent.
- Strong expertise in virtualization platforms (KVM, Xen, VMware, etc.).
- Solid understanding of networking, storage technologies, and hypervisor management.
- Experience with Linux system administration, shell scripting, and infrastructure automation (e.g., Ansible, Terraform).
- Familiarity with cloud orchestration, multi-tenancy, resource quotas, and metering.
- Knowledge of cloud security principles, access control, and monitoring.
- Exposure to containers, Kubernetes, or hybrid cloud environments is a plus.
- Ability to evaluate trade-offs between open-source and commercial solutions.

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Relevant certifications (e.g., OpenNebula Certified Professional, Apache CloudStack certifications, Linux Foundation Certified System Administrator, RHCSA/RHCE).

Engagement Flexibility: We are open to either full-time employment or a contract-based engagement, depending on the candidate's availability and interest. Remote work options are available, but you must be accessible for collaboration with internal teams during business hours.

Why Join Fluidech:
- Work on cutting-edge cloud and cybersecurity solutions.
- Opportunity to architect core technology platforms from the ground up.
- Flexible working models and a high-ownership culture.
- Collaborate with a fast-growing, award-winning technology team.
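As a flavour of the automation work against such a stack, here is a hedged sketch of calling the Apache CloudStack API directly from Python, based on its documented request-signing scheme (parameters sorted, query string lowercased, HMAC-SHA1, base64). The endpoint and keys are placeholders; details should be verified against the CloudStack version actually in use.

```python
# Hedged sketch, not a definitive client: sign and send a CloudStack API call.
import base64
import hashlib
import hmac
import urllib.parse
import requests

ENDPOINT = "https://cloud.example.com/client/api"  # placeholder endpoint
API_KEY = "..."     # placeholder API key
SECRET_KEY = "..."  # placeholder secret key

def signed_request(command: str, **params) -> dict:
    params.update({"command": command, "apikey": API_KEY, "response": "json"})
    # String to sign: keys sorted, values URL-encoded, whole string lowercased.
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}"
        for k, v in sorted(params.items())
    )
    digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(), hashlib.sha1).digest()
    params["signature"] = base64.b64encode(digest).decode()
    return requests.get(ENDPOINT, params=params, timeout=30).json()

# Example (hypothetical): signed_request("listVirtualMachines")
```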

Posted 2 days ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


We are seeking a highly skilled Senior Technical Architect with expertise in Databricks, Apache Spark, and modern data engineering architectures. The ideal candidate will have a strong grasp of Generative AI and RAG pipelines and a keen interest (or working knowledge) in Agentic AI systems. This individual will lead the architecture, design, and implementation of scalable data platforms and AI-powered applications for our global clients. This high-impact role requires technical leadership, cross-functional collaboration, and a passion for solving complex business challenges with data and AI.

Responsibilities:
- Lead architecture, design, and deployment of scalable data solutions using Databricks and the medallion architecture.
- Guide technical teams in building batch and streaming data pipelines using Spark, Delta Lake, and MLflow.
- Collaborate with clients and internal stakeholders to understand business needs and translate them into robust data and AI architectures.
- Design and prototype Generative AI applications using LLMs, RAG pipelines, and vector stores.
- Provide thought leadership on the adoption of Agentic AI systems in enterprise environments.
- Mentor data engineers and solution architects across multiple projects.
- Ensure adherence to security, governance, performance, and reliability best practices.
- Stay current with emerging trends in data engineering, MLOps, GenAI, and agent-based systems.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical discipline.
- 10+ years of experience in data architecture, data engineering, or software architecture roles.
- 5+ years of hands-on experience with Databricks, including Spark SQL, Delta Lake, Unity Catalog, and MLflow.
- Proven experience in designing and delivering production-grade data platforms and pipelines.
- Exposure to LLM frameworks (OpenAI, Hugging Face, LangChain, etc.) and vector databases (FAISS, Weaviate, etc.).
- Strong understanding of cloud platforms (Azure, AWS, or GCP), particularly in the context of Databricks deployment.
- Knowledge of, or interest in, Agentic AI frameworks and multi-agent system design is highly desirable.

Technical Skills:
- Databricks (incl. Spark, Delta Lake, MLflow, Unity Catalog)
- Python, SQL, PySpark
- GenAI tools and libraries (LangChain, OpenAI, etc.)
- CI/CD and DevOps for data
- REST APIs, JSON, data serialization formats
- Cloud services (Azure/AWS/GCP)

Soft Skills:
- Strong communication and stakeholder management skills
- Ability to lead and mentor diverse technical teams
- Strategic thinking with a bias for action
- Comfortable with ambiguity and iterative development
- Client-first mindset and consultative approach
- Excellent problem-solving and analytical skills

Preferred Certifications:
- Databricks Certified Data Engineer / Architect
- Cloud certifications (Azure/AWS/GCP)
- Any certifications in AI/ML, NLP, or GenAI frameworks are a plus
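To make the RAG terminology above concrete, here is a toy sketch of the retrieval step: cosine similarity over a small in-memory "vector store" using NumPy. Real deployments would use an embedding model and a store such as FAISS or Weaviate; the vectors below are random placeholders.

```python
# Toy illustration of RAG retrieval: rank documents by cosine similarity.
import numpy as np

documents = ["delta lake docs", "mlflow guide", "unity catalog intro"]
doc_vectors = np.random.default_rng(0).normal(size=(3, 8))  # stand-in embeddings

def retrieve(query_vector: np.ndarray, k: int = 2) -> list[str]:
    # Cosine similarity = dot product of L2-normalized vectors.
    norms = doc_vectors / np.linalg.norm(doc_vectors, axis=1, keepdims=True)
    q = query_vector / np.linalg.norm(query_vector)
    scores = norms @ q
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

print(retrieve(np.random.default_rng(1).normal(size=8)))
```

The retrieved passages would then be injected into the LLM prompt; that generation step is deliberately omitted here.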

Posted 2 days ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote


Our Mission: At Palo Alto Networks® everything starts and ends with our mission: being the cybersecurity partner of choice, protecting our digital way of life. Our vision is a world where each day is safer and more secure than the one before. We are a company built on the foundation of challenging and disrupting the way things are done, and we're looking for innovators who are as committed to shaping the future of cybersecurity as we are.

Who We Are: We take our mission of protecting the digital way of life seriously. We are relentless in protecting our customers, and we believe that the unique ideas of every member of our team contribute to our collective success. Our values were crowdsourced by employees and are brought to life through each of us every day, from disruptive innovation and collaboration to execution, and from showing up for each other with integrity to creating an environment where we all feel included.

As a member of our team, you will be shaping the future of cybersecurity. We work fast, value ongoing learning, and respect each employee as a unique individual. Knowing we all have different needs, our development and personal wellbeing programs are designed to give you choice in how you are supported. This includes our FLEXBenefits wellbeing spending account with over 1,000 eligible items selected by employees, our mental and financial health resources, and our personalized learning opportunities, just to name a few!

At Palo Alto Networks, we believe in the power of collaboration and value in-person interactions. This is why our employees generally work full time from our office, with flexibility offered where needed. This setup fosters casual conversations, problem-solving, and trusted relationships. Our goal is to create an environment where we all win with precision.

Your Career: The Engineering TAC (ETAC) Advanced Solutions team is an exciting crossroads between the Technical Assistance Center (TAC) and Engineering. This team is uniquely empowered to drive decisions and to be the thought leaders within the Global Customer Support organization at Palo Alto Networks. We are a relatively small, global team consisting of top performers with support, engineering, and development backgrounds. Our roles are very hands-on and have a high impact on the company. The Advanced Solutions role also includes building and architecting robust environments to assist with complex issue reproduction and resolution, as well as large-scale, cross-platform lab buildouts for feature testing, software releases, and more. Our team consists of engineers experienced in network engineering, NetSec, QA, software development/DevOps, and cloud, as well as SMEs in bleeding-edge tools such as Ixia/Keysight and Spirent. The team's mission includes application and tools development, AI/machine learning R&D, DB systems administration, release management, and data analytics. You will network and collaborate with key stakeholders within Global Support, Engineering, QA, PM, Sales, and more, leveraging your ability to explain difficult technical issues to both non-technical and technical professionals.

Your Impact:
- An ETAC engineer has the highest level of expertise among support teams and is responsible for staying up to date with technical details of Palo Alto Networks' new products and the industry in general.
- Work with TAC to provide expert-level technical support on customer issues that involve very complex network topologies, architectures, and security designs.
- Lead technical discussions with cross-functional teams, fostering an environment of transparency that ultimately leads to better products.
- Develop advanced troubleshooting-focused tools and scripts to help solve complex customer issues and improve product supportability.
- Help drive and enable ML/AI-related projects.
- Own critical and executive-level issues, partnering primarily with Customer Support and Engineering to identify and resolve customer issues; this entails working with the TAC case owner and Engineering on replication or verification and communicating updates.
- Lead in identifying problems and taking action to fix them across support and product life cycles.
- Develop and deliver expert-level training materials for TAC support, Engineering, and Professional Services teams.
- Own release management: assist with managing the end-to-end release process, including coordinating with various teams to gather release requirements and dependencies; schedule, plan, and control the software delivery process for on-prem and cloud products (CSP/Adminsite/AWS/Azure/OCI/GCP).
- Coordinate with IT/Dev/QA to ensure IT requirements are met for a seamless release process, releasing software after the testing/deployment stage is complete.
- Define strategic usage of release management tools (Autoex/Jenkins/automation staging scripts).
- Collaborate on product development with cross-functional teams including Engineering, QA, and PM.
- Triage production issues impacting customer deliverables on the Palo Alto Networks Support Portal.

Your Experience:
- Minimum of 7 years of professional experience.
- Technical support or development experience supporting enterprise customers with very complex LAN/WAN environments.
- Deep understanding of TCP/IP and advanced knowledge of LAN/WAN technologies; expertise with general routing/switching, routing protocols (e.g., BGP, OSPF, multicast), and branch and data center architectures.
- Expertise with remote-access VPN solutions, IPsec, PKI, and SSL.
- Expertise with cloud services and infrastructure is a plus.
- Familiarity with C, Python, or at least one scripting language. While this is not a developer role, you should have experience automating moderately complex tasks.
- Experience with Palo Alto Networks products is highly desired.
- Understanding of how data packets get processed: devices shouldn't be a "black box"; you should understand packet processing at various stages and how it can result in different symptoms and outcomes.
- Excellent communication skills with the ability to deliver highly technical, informative presentations. While you will not take calls from a queue, your expertise may be called upon to speak with customers from time to time, along with support members, developers, sales engineers, and the rest of your team.
- Proficiency in creating technical documentation using applications such as PowerPoint/Google Slides or knowledge-base/intranet platforms such as LumApps, Jive, or Confluence.
- Familiarity with automation tools such as Jenkins and Terraform.
- Understanding of Linux operating systems; able to operate headless Linux systems and write shell scripts.
- Basic knowledge of deploying and configuring web servers, i.e., Nginx, Apache, IIS.
- Understanding of load-balancing technologies and HTTP forwarding with Nginx, HAProxy, and the load balancers provided by AWS, Azure, and Google Cloud.
- Familiarity with virtualization technologies including VMware, KVM, OpenStack, AWS, Google Cloud, and Azure.
- Familiarity with Docker: able to create, manage, and deploy Docker images, manage running containers, and create docker-compose YAML files.
- Familiarity with front-end technologies including JavaScript, React, HTML, and CSS for building responsive, user-friendly interfaces.
- Experience in back-end development using Python and frameworks such as Flask.
- A creative and hands-on approach to testing and enhancing small applications, participating in all aspects of the testing lifecycle, from functional and performance testing to idea generation and continuous monitoring, with a focus on improvement and efficacy to ensure optimal quality and user satisfaction.
- Willingness to work flexible times, including occasional weekends and evenings.

The Team: Our technical support team is critical to our success and mission. As part of this team, you enable customer success by providing support to clients after they have purchased our products. Our dedication to our customers doesn't stop once they sign; it evolves. As threats and technology change, we stay in step to accomplish our mission. You'll be involved in implementing new products, transitioning from old products to new, and fixing integrations and critical issues as they are raised; in fact, you'll seek them out to ensure our clients are safely supported. We fix and identify technical problems, with a pointed focus on providing the best customer support in the industry.

Our Commitment: We're problem solvers who take risks and challenge cybersecurity's status quo. It's simple: we can't accomplish our mission without diverse teams innovating, together. We are committed to providing reasonable accommodations for all qualified individuals with a disability. If you require assistance or accommodation due to a disability or special need, please contact us at accommodations@paloaltonetworks.com. Palo Alto Networks is an equal opportunity employer. We celebrate diversity in our workplace, and all qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or other legally protected characteristics. All your information will be kept confidential according to EEO guidelines.
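Several of the requirements above (Python scripting, Flask, basic networking) converge in the kind of small internal tooling the role describes. A hedged sketch, with hostnames and ports as placeholders: a Flask endpoint reporting reachability of a few backend dependencies.

```python
# Illustrative sketch only: a minimal dependency health-check service.
import socket
from flask import Flask, jsonify

app = Flask(__name__)
DEPENDENCIES = {"db": ("db.internal", 5432), "cache": ("cache.internal", 6379)}

def reachable(host: str, port: int) -> bool:
    try:
        with socket.create_connection((host, port), timeout=2):
            return True
    except OSError:
        return False

@app.route("/healthz")
def healthz():
    status = {name: reachable(h, p) for name, (h, p) in DEPENDENCIES.items()}
    # 200 when everything is reachable, 503 otherwise.
    return jsonify(status), (200 if all(status.values()) else 503)

if __name__ == "__main__":
    app.run(port=8080)
```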

Posted 2 days ago

Apply

2.0 years

0 Lacs

India

Remote


🌟 We're Hiring: Customer Service Representatives & Support Managers
📍 Location: Remote
🕒 Employment Type: Contract-based / Freelance / Part-time, 1 month
📅 Start Date: Immediate

Are you passionate about delivering exceptional customer experiences and driving support excellence? Join our fast-paced, customer-obsessed team, where you'll play a critical role in shaping how we support users across multiple channels and platforms.

🔧 Key Responsibilities
- Respond to and resolve multichannel support tickets (email, chat, voice, social, etc.)
- Monitor and report key support KPIs and metrics (e.g., CSAT, FRT, ART)
- Update and maintain internal knowledge bases and help center documentation
- Handle customer escalations with professionalism and urgency
- Coach, mentor, and lead junior support agents to consistently meet quality standards
- Identify and implement process improvements to increase efficiency and customer satisfaction
- Collaborate with cross-functional teams (product, sales, QA) to relay customer insights

💻 Tools & Platforms You'll Work With
Commercial support & CX platforms: Zendesk, Freshdesk, Salesforce Service Cloud, ServiceNow, HubSpot Service Hub, Intercom, Help Scout, NICE IEX, Verint, Assembled, RingCentral, Nextiva, Tableau, Qualtrics, SurveyMonkey, Slack, Microsoft Teams
Open-source / free tools:
- Ticketing: osTicket, Zammad, Request Tracker, UVdesk, FreeScout
- Messaging: Rocket.Chat, Mattermost, Element, Jitsi Meet
- Documentation: DokuWiki, BookStack, MediaWiki, Outline
- Reporting & analytics: Metabase, Apache Superset, Google Data Studio (free)
- Survey & feedback: Google Forms, LimeSurvey

✅ What We're Looking For
- 2+ years of experience in customer support or service delivery roles
- Strong verbal and written communication skills
- Proven ability to manage and resolve complex customer issues
- Familiarity with support automation, AI/chatbots, or workflow optimization is a plus
- Experience with both enterprise and open-source tools is an advantage
- Leadership or team coaching experience (for Support Manager applicants)

Interested? Please share your profile with Ganapathikumar@highbrowtechnology.com

Posted 2 days ago

Apply

0.0 years

0 Lacs

Tiruchchirappalli, Tamil Nadu

On-site


A data scientist collects and analyzes large datasets to uncover insights and create solutions that support organizational goals. They combine technical, analytical, and communication skills to interpret data and influence decision-making.

Key Responsibilities:
- Gather data from multiple sources and prepare it for analysis.
- Analyze large volumes of structured and unstructured data to identify trends and patterns.
- Develop machine learning models and predictive algorithms to solve business problems.
- Use statistical techniques to validate findings and ensure accuracy.
- Automate processes using AI tools and programming.
- Create clear, engaging visualizations and reports to communicate results.
- Work closely with different teams to apply data-driven insights.
- Stay updated with the latest tools, technologies, and methods in data science.

Tools and Technologies:
- Programming languages: Python, R, SQL.
- Data visualization: Tableau, Power BI, Matplotlib.
- Machine learning frameworks: TensorFlow, scikit-learn, PyTorch.
- Big data platforms: Apache Hadoop, Spark.
- Cloud platforms: AWS, Azure, Google Cloud.
- Statistical tools: SAS, SPSS.

Job Type: Full-time
Pay: ₹9,938.89 - ₹30,790.14 per month
Schedule: Day shift, Monday to Friday, morning shift, weekend availability
Supplemental Pay: Performance bonus
Application Question(s): Are you an immediate joiner?
Location: Tiruchirappalli, Tamil Nadu (preferred)
Work Location: In person
Application Deadline: 19/06/2025
Expected Start Date: 19/06/2025
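A minimal sketch of the modeling workflow listed above, using scikit-learn's bundled iris dataset so the example runs as-is; the model choice and dataset are illustrative only, not part of the posting.

```python
# Illustrative train/evaluate loop for a simple predictive model.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = LogisticRegression(max_iter=1000)  # higher max_iter avoids convergence warnings
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```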

Posted 2 days ago

Apply

0 years

0 Lacs

Mulshi, Maharashtra, India

On-site


Area(s) of responsibility: Data Management (AWS) Developer

We are looking for a Data Management (AWS) developer who will serve as the technical counterpart to data stewards across various business domains. This role will focus on the technical aspects of data management, including the integration of data catalogs, data quality management, and access management frameworks within our data lakehouse.

Key Responsibilities:
- Integrate the Acryl data catalog with the AWS Glue data catalog to enhance data discoverability and management.
- Develop frameworks and processes for deploying and maintaining data classification and data quality rules in the data lakehouse.
- Implement and maintain Lake Formation access frameworks, including OpenID Connect (OIDC), for secure data access.
- Build and maintain data quality and classification reports and visualizations to support data-driven decision-making.
- Develop and implement mechanisms for column-level data lineage in the data lakehouse.
- Collaborate with data stewards to ensure effective data ownership, cataloging, and metadata management.

Qualifications:
- Relevant experience in data management, data governance, or related technical fields.
- Strong technical expertise in AWS services, particularly AWS Glue, Lake Formation, and data quality management tools.
- Familiarity with data security practices, including OIDC and AWS IAM.
- Experience with AWS Athena and Apache Airflow.
- Experience with Terraform, GitHub, and Python.
- Relevant certifications (e.g., CDMP) are a plus.
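As a hedged sketch of the catalog/classification plumbing this role covers, the snippet below uses boto3's Glue client to walk catalog tables and surface columns carrying a "classification" parameter. The database name, region, and parameter key are assumptions for illustration, not anything the posting specifies.

```python
# Illustrative only: scan Glue catalog columns for a custom classification tag.
import boto3

glue = boto3.client("glue", region_name="us-east-1")  # placeholder region

paginator = glue.get_paginator("get_tables")
for page in paginator.paginate(DatabaseName="lakehouse_curated"):  # placeholder DB
    for table in page["TableList"]:
        for column in table["StorageDescriptor"]["Columns"]:
            # "classification" is an assumed custom column-parameter key.
            label = column.get("Parameters", {}).get("classification")
            if label:
                print(table["Name"], column["Name"], label)
```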

Posted 2 days ago

Apply

5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site


Experience: 5+ years
Notice period: Immediate to 15 days
Rounds: 3 rounds (virtual)
Mandatory skills: Apache Spark, Hive, Hadoop, Scala, Databricks

Job Description

The Role:
- Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
- Constructing infrastructure for efficient ETL processes from various sources and storage systems.
- Leading the implementation of algorithms and prototypes to transform raw data into useful information.
- Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
- Creating innovative data validation methods and data analysis tools.
- Ensuring compliance with data governance and security policies.
- Interpreting data trends and patterns to establish operational alerts.
- Developing analytical tools, programs, and reporting mechanisms.
- Conducting complex data analysis and presenting results effectively.
- Preparing data for prescriptive and predictive modeling.
- Continuously exploring opportunities to enhance data quality and reliability.
- Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements:
- Experience with big data technologies (Hadoop, Spark, NiFi, Impala).
- 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.
- High proficiency in Scala/Java and Spark for applied large-scale data processing.
- Expertise with big data technologies, including Spark, Data Lake, and Hive.
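For illustration, here is a compact sketch of the pipeline work described: read raw CSV, apply a transformation, and write a Delta table. It is shown in PySpark for brevity (the role itself calls for Scala/Java); paths and the key column are placeholders, and a Spark runtime with the Delta Lake libraries (e.g., Databricks) is assumed.

```python
# Hedged sketch of a batch Spark ETL step, not a production pipeline.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = spark.read.option("header", True).csv("/mnt/raw/orders/")  # placeholder path
clean = (
    raw.dropDuplicates(["order_id"])                     # assumed key column
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)                      # basic validation rule
)
clean.write.format("delta").mode("overwrite").save("/mnt/curated/orders/")
```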

Posted 2 days ago

Apply

9.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Role Expectations:
- Design, develop, and execute automated tests to ensure product quality in digital transformation initiatives.
- Collaborate with developers and business stakeholders to understand project requirements and define test strategies.
- Implement API testing using Mockito, WireMock, and stubs for effective validation of integrations.
- Utilize Kafka and MQ to test and monitor real-time data streaming scenarios.
- Perform automation testing using RestAssured, Selenium, and TestNG to ensure smooth delivery of applications.
- Leverage Splunk and AppDynamics for real-time monitoring, identifying bottlenecks, and diagnosing application issues.
- Create and maintain continuous integration/continuous deployment (CI/CD) pipelines using Gradle and Docker.
- Conduct performance testing using tools like Gatling and JMeter to evaluate application performance and scalability.
- Participate in test management and defect management processes to track progress and issues effectively.
- Work closely with onshore teams and provide insights to enhance test coverage and overall quality.

Qualifications:
- 9+ years of relevant experience in QA automation and Java.
- Programming: Strong experience with Java 8 and above, including a deep understanding of the Streams API.
- Frameworks: Proficiency in Spring Boot and JUnit for developing and testing robust applications.
- API Testing: Advanced knowledge of RestAssured and Selenium for API and UI automation; candidates must demonstrate hands-on expertise.
- CI/CD Tools: Solid understanding of Jenkins for continuous integration and deployment.
- Cloud Platforms: Working knowledge of AWS for cloud testing and deployment.
- Monitoring Tools: Familiarity with Splunk and AppDynamics for performance monitoring and troubleshooting.
- Defect Management: Practical experience with test management tools and defect tracking.
- Build & Deployment: Experience with Gradle for build automation and Docker for application containerization.
- SQL: Strong proficiency in SQL, including query writing and database operations for validating test results.
- Domain Knowledge: Prior experience in the payments domain with a good understanding of domain-specific workflows.

Nice to Have:
- Data streaming tools: Experience with Kafka (including basic queries and architecture) or MQ for data streaming testing; financial services or payments domain experience is preferred.
- Frameworks: Experience with Apache Camel for message-based application integration.
- Performance testing: Experience with Gatling and JMeter for load and performance testing.

Posted 2 days ago

Apply

5.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description

Role Purpose: The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do:

Oversee and support the process by reviewing daily transactions on performance parameters:
- Review the performance dashboard and the scores for the team.
- Support the team in improving performance parameters by providing technical support and process guidance.
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions.
- Ensure standard processes and procedures are followed to resolve all client queries.
- Resolve client queries as per the SLAs defined in the contract.
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting.
- Document and analyze call logs to spot the most frequently occurring trends and prevent future problems.
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution.
- Ensure all product information and disclosures are given to clients before and after the call/email requests.
- Avoid legal challenges by monitoring compliance with service agreements.

Handle technical escalations through effective diagnosis and troubleshooting of client queries:
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements; if unable to resolve an issue, escalate it to TA & SES in a timely manner.
- Provide product support and resolution to clients by performing a question diagnosis while guiding users through step-by-step solutions.
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner.
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business.
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations.
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs.

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
- Mentor and guide Production Specialists on improving technical knowledge.
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists.
- Develop and conduct trainings (triages) within products for Production Specialists as per target, and inform the client about the triages being conducted.
- Undertake product trainings to stay current with product features, changes, and updates; enroll in product-specific and any other trainings per client requirements/recommendations.
- Identify and document the most common problems and recommend appropriate resolutions to the team.
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks.

Deliver (performance parameters and measures):
1. Process - No. of cases resolved per day, compliance with process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT.
2. Team Management - Productivity, efficiency, absenteeism.
3. Capability Development - Triages completed, technical test performance.

Mandatory Skills: Apache Spark.
Experience: 5-8 years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA; as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 2 days ago

Apply

3.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Work experience: 3-6 years
Budget: ₹7 lakh max
Notice period: Immediate to 30 days

Linux:
- Install, configure, and maintain Linux servers (Red Hat, CentOS, Ubuntu, Amazon Linux).
- Install the Linux OS via network and Kickstart installation.
- Manage system updates, patch management, and kernel upgrades.
- Create and manage user accounts, file systems, permissions, and storage.
- Write shell scripts (Bash, Python) for task automation.
- Monitor server performance and troubleshoot hardware/software issues.
- Handle incident management, root cause analysis, and preventive maintenance.
- Implement and manage backup solutions (rsync, cron jobs, snapshot backups).
- Harden servers by configuring firewalls (iptables, firewalld), securing SSH, and managing SELinux.
- Configure and troubleshoot networking services (DNS, DHCP, FTP, HTTP, NFS, Samba).
- Work on virtualization and cloud technologies (AWS EC2, VPC, S3, RDS basics if required).
- Maintain detailed documentation of system configuration and procedures.
- Implement and configure Apache and Tomcat web servers with OpenSSL on Linux.
- Manage swap space.
- LVM (extending, reducing, removing, and merging), backup and restoration.

Amazon Web Services:
- AWS infrastructure management: provision and manage cloud resources like EC2, S3, RDS, VPC, IAM, EKS, and Lambda.
- Cloud architecture: design and implement secure, scalable, and reliable cloud solutions.
- Automation and IaC: automate deployments using tools like Terraform, CloudFormation, or the AWS CDK.
- Security management: configure IAM roles, security groups, and encryption (KMS), and enforce security best practices.
- Monitoring and optimization: monitor cloud resources with CloudWatch and X-Ray, and optimize for cost and performance.
- Backup and disaster recovery: set up data backups (S3, Glacier, EBS snapshots) and design DR strategies.
- CI/CD implementation: build and maintain CI/CD pipelines using AWS services (CodePipeline, CodeBuild) or Jenkins, GitLab, GitHub.
- Networking: manage VPCs, subnets, internet gateways, NAT, VPNs, and Route 53 DNS configurations.
- Troubleshooting and support: identify and fix cloud resource issues and perform root cause analysis.
- Migration projects: migrate on-premises servers, databases, and applications to AWS.

Windows Server and Azure:
- Active Directory: implementation, migration, management, and troubleshooting.
- Deep knowledge of DHCP Server and of patch management.
- Troubleshooting the Windows operating system.
- Decent knowledge of Azure (creation of VMs, configuring network rules, migration, management, and troubleshooting).
- Deep knowledge of VMware ESXi (upgrading server firmware, creating VMs, managing backups, monitoring, etc.).

Networking:
- Knowledge of IP addressing, NAT, P2P protocols, SSL and IPsec VPNs, etc.
- Deep knowledge of VPNs.
- Knowledge of mVoIP, VMs, SIP, PRI, and leased lines.
- Monitor network bandwidth and maintain stability.
- Configure switches and routers; troubleshoot network devices.
- Must be able to work on Cisco Meraki access point devices.

Firewall & Endpoint Security:
- Decent knowledge of Fortinet firewalls, including creating objects, routing, creating rules, and monitoring.
- Decent knowledge of CrowdStrike.
- Knowledge of vulnerability assessment.

Office 365:
- Deep knowledge of Office 365 (mail creation, backup and archiving, security rules, security filters, distribution list creation, etc.).
- Knowledge of MX, TXT, and other DNS records.
- Deep knowledge of Office 365 apps such as Teams, Outlook, and Excel.
- SharePoint management.

Other Tasks:
- Hardware servicing of laptops and desktops.
- Maintaining an up-to-date asset inventory and managing utility invoices.
- Handling L1 and L2 troubleshooting, application-related issues, and vendor management.
- Website hosting and monitoring.
- Tracking all software licenses and cloud service renewal periods, and ensuring they are renewed on time.
- Monitoring, managing, and troubleshooting servers.
- Knowledge of NAS, Endpoint Central, and ticketing tools.
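In the spirit of the "shell scripts (Bash, Python) for task automation" duty above, here is a small standard-library sketch intended to run from cron: warn when any monitored mount point crosses a usage threshold. The mount list and threshold are illustrative.

```python
# Illustrative automation sketch: disk-usage alerting for a Linux server.
import shutil

MOUNTS = ["/", "/var", "/home"]   # adjust to the server's layout
THRESHOLD = 0.85                  # alert at 85% used

for mount in MOUNTS:
    usage = shutil.disk_usage(mount)
    used_fraction = usage.used / usage.total
    if used_fraction >= THRESHOLD:
        # In practice this would feed a ticketing or alerting tool.
        print(f"WARNING: {mount} is {used_fraction:.0%} full")
```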

Posted 2 days ago

Apply

0.0 - 1.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Indeed logo

We're Hiring: GCP DevOps Engineer (with Node.js Skills)

Locations: Bengaluru / Chennai / Pune / Hyderabad / Vadodara (On-site/Hybrid as per role)
Positions Available: 3
Employment Type: Full-time
Salary: ₹10–14 LPA (based on experience and interview performance)

About the Role:
We are looking for passionate and curious GCP DevOps Engineers who are comfortable working in dynamic environments and love combining DevOps best practices with backend development. If you have 1–3 years of hands-on experience, basic knowledge of Node.js, and a solid grip on GCP, Kubernetes, and Git, this could be the perfect role to elevate your career.

What You'll Be Doing:
• Deploy, manage, and monitor cloud infrastructure on Google Cloud Platform (GCP)
• Work with Kubernetes to orchestrate containerized applications
• Collaborate with developers to integrate Node.js-based services and APIs
• Handle Kafka messaging pipelines (consumers and producers)
• Manage PostgreSQL databases (schema design, queries, performance tuning)
• Utilize Git and GitHub for version control, code reviews, and CI workflows
• Use VS Code or similar IDEs for development and troubleshooting
• Troubleshoot issues independently and ensure smooth deployment cycles
• Collaborate effectively in distributed teams and maintain clear documentation

Minimum Qualifications:
• Bachelor's degree in Computer Science, Engineering, or equivalent practical experience
• 1–3 years of hands-on experience in software development or DevOps engineering

Key Skills We're Looking For:
• Google Cloud Platform (GCP) services
• Kubernetes and containerization tools
• Basic to intermediate Node.js development (especially REST APIs/backend services)
• Apache Kafka (publishing/consuming messages)
• PostgreSQL or similar RDBMS
• Git, GitHub, and collaborative workflows
• Excellent troubleshooting, problem-solving, and team collaboration skills

Good to Have:
• Experience with CI/CD pipelines (e.g., Jenkins, GitHub Actions)
• Familiarity with Agile/Scrum methodologies
• Exposure to observability tools (Prometheus, Grafana, ELK, etc.)

Why Join Us?
• Work on impactful, production-grade cloud solutions
• Collaborate with highly skilled teams across geographies
• Gain experience across cutting-edge DevOps stacks
• Fast-paced, learning-rich environment with room to grow

Job Types: Full-time, Permanent
Pay: ₹1,000,000.00 - ₹1,400,000.00 per year
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: Bengaluru, Karnataka: Reliably commute or planning to relocate before starting work (Required)
Education: Bachelor's (Required)
Experience: Google Cloud Platform: 2 years (Required); Kubernetes: 1 year (Required); Node.js: 1 year (Preferred)
Work Location: In person
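The Kafka pipeline duties above would be implemented in Node.js for this role; purely as an illustrative, language-neutral sketch, here is the consumer side in Python using the confluent-kafka client. The broker address, group id, and topic are hypothetical placeholders, not values from the listing.

```python
# Minimal Kafka consumer sketch (illustrative; the role itself uses Node.js).
# Broker, topic, and group id below are placeholders.
# Requires: pip install confluent-kafka
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "demo-consumer-group",      # placeholder group id
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])              # placeholder topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)    # block up to 1s for a message
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        print(f"{msg.topic()}[{msg.partition()}] @ {msg.offset()}: "
              f"{msg.value().decode('utf-8')}")
finally:
    consumer.close()
```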

Posted 2 days ago

Apply

9.0 - 14.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Greetings from TCS!!!

TCS is hiring for Big Data Architect
Location: PAN India
Years of Experience: 9-14 years

Job Description:
• Experience with Python, Spark, and Hive data pipelines using ETL processes
• Apache Hadoop development and implementation
• Experience with streaming frameworks such as Kafka
• Hands-on experience with Azure/AWS/Google data services
• Work with big data technologies (Spark, Hadoop, BigQuery, Databricks) for data preprocessing and feature engineering
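To give a flavor of the Python/Spark ETL pipeline work this role describes, here is a minimal illustrative PySpark sketch; the S3 paths and column names are hypothetical placeholders, not part of the TCS description.

```python
# Minimal PySpark ETL sketch (illustrative; paths and columns are placeholders).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw CSV data (placeholder path).
raw = spark.read.option("header", True).csv("s3a://example-bucket/raw/orders/")

# Transform: filter bad rows and aggregate daily totals.
daily = (
    raw.where(F.col("amount").isNotNull())
       .withColumn("order_date", F.to_date("created_at"))
       .groupBy("order_date")
       .agg(F.sum("amount").alias("total_amount"),
            F.count("*").alias("order_count"))
)

# Load: write partitioned Parquet for downstream Hive/warehouse queries.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-bucket/curated/daily_orders/"
)

spark.stop()
```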

Posted 2 days ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

About Client:
Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media, and is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: Data Engineer
Key Skills: Python, ETL, Snowflake, Apache Airflow
Job Locations: Pan India
Experience: 6-7 years
Education Qualification: Any graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate

Job Description:
• 6 to 10 years of experience in data engineering roles with a focus on building scalable data solutions.
• Proficiency in Python for ETL, data manipulation, and scripting.
• Hands-on experience with Snowflake or equivalent cloud-based data warehouses.
• Strong knowledge of orchestration tools such as Apache Airflow or similar.
• Expertise in implementing and managing messaging queues like Kafka, AWS SQS, or similar.
• Demonstrated ability to build and optimize data pipelines at scale, processing terabytes of data.
• Experience in data modeling, data warehousing, and database design.
• Proficiency in working with cloud platforms like AWS, Azure, or GCP.
• Strong understanding of CI/CD pipelines for data engineering workflows.
• Experience working in an Agile development environment, collaborating with cross-functional teams.

Preferred Skills:
• Familiarity with other programming languages like Scala or Java for data engineering tasks.
• Knowledge of containerization and orchestration technologies (Docker, Kubernetes).
• Experience with stream processing frameworks like Apache Flink.
• Experience with Apache Iceberg for data lake optimization and management.
• Exposure to machine learning workflows and integration with data pipelines.

Soft Skills:
• Strong problem-solving skills with a passion for solving complex data challenges.
• Excellent communication and collaboration skills to work with cross-functional teams.
• Ability to thrive in a fast-paced, innovative environment.
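Since Apache Airflow orchestration is a core skill here, a minimal illustrative DAG sketch follows; the DAG id, schedule, and task bodies are hypothetical placeholders (this assumes Airflow 2.4+ for the `schedule` argument).

```python
# Minimal Airflow DAG sketch (illustrative; ids, schedule, and task bodies
# are placeholders, not details from the listing).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source data (placeholder)")

def load():
    print("load into warehouse, e.g. Snowflake (placeholder)")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # run extract before load
```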

Posted 2 days ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Linkedin logo

At Aramya, we're redefining fashion for India's underserved Gen X/Y women, offering size-inclusive, comfortable, and stylish ethnic wear at affordable prices. Launched in 2024, we've already achieved ₹40 Cr in revenue in our first year, driven by a unique blend of data-driven design, in-house manufacturing, and a proprietary supply chain. Today, with an ARR of ₹100 Cr, we're scaling rapidly with ambitious growth plans for the future. Our vision is bold: to build the most loved fashion and lifestyle brands across the world while empowering individuals to express themselves effortlessly. Backed by marquee investors like Accel and Z47, we're on a mission to make high-quality ethnic wear accessible to every woman. We've built a community of loyal customers who love our weekly design launches, impeccable quality, and value-for-money offerings. With a fast-moving team driven by creativity, technology, and customer obsession, Aramya is more than a fashion brand; it's a movement to celebrate every woman's unique journey.

We're looking for a passionate Data Engineer with a strong foundation. The ideal candidate should have a solid understanding of D2C or e-commerce platforms and be able to work across the stack to build high-performing, user-centric digital experiences.

Key Responsibilities:
• Design, build, and maintain scalable ETL/ELT pipelines using tools like Apache Airflow, Databricks, and Spark.
• Own and manage data lakes/warehouses on AWS Redshift (or Snowflake/BigQuery).
• Optimize SQL queries and data models for analytics, performance, and reliability.
• Develop and maintain backend APIs using Python (FastAPI/Django/Flask) or Node.js.
• Integrate external data sources (APIs, SFTP, third-party connectors) and ensure data quality and validation.
• Implement monitoring, logging, and alerting for data pipeline health.
• Collaborate with stakeholders to gather requirements and define data contracts.
• Maintain infrastructure-as-code (Terraform/CDK) for data workflows and services.

Must-Have Skills:
• Strong in SQL and data modeling (OLTP and OLAP).
• Solid programming experience in Python, preferably for both ETL and backend work.
• Hands-on experience with Databricks, Redshift, or Spark.
• Experience building and managing ETL pipelines with tools like Airflow, dbt, or similar.
• Deep understanding of REST APIs, microservices architecture, and backend design patterns.
• Familiarity with Docker, Git, and CI/CD pipelines.
• Good grasp of cloud platforms (preferably AWS) and services like S3, Lambda, ECS/Fargate, and CloudWatch.

Nice-to-Have Skills:
• Exposure to streaming platforms like Kafka, Kinesis, or Flink.
• Experience with Snowflake, BigQuery, or Delta Lake.
• Proficiency in data governance, security best practices, and PII handling.
• Familiarity with GraphQL, gRPC, or event-driven systems.
• Knowledge of data observability tools (Monte Carlo, Great Expectations, Datafold).
• Experience working in a D2C/e-commerce or analytics-heavy product environment.
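For the backend-API responsibility above, here is a minimal illustrative FastAPI sketch; the endpoint, data model, and the in-memory stand-in for a warehouse query are hypothetical placeholders.

```python
# Minimal FastAPI backend sketch (illustrative; the endpoint and data model
# are placeholders). Requires: pip install fastapi uvicorn
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="data-api-sketch")

class DailySales(BaseModel):
    order_date: str
    total_amount: float
    order_count: int

# Stand-in for a warehouse query (e.g. against Redshift or Snowflake).
FAKE_ROWS = {"2024-01-01": DailySales(order_date="2024-01-01",
                                      total_amount=125000.0, order_count=342)}

@app.get("/sales/{order_date}", response_model=DailySales)
def get_daily_sales(order_date: str) -> DailySales:
    row = FAKE_ROWS.get(order_date)
    if row is None:
        raise HTTPException(status_code=404, detail="no data for that date")
    return row
```

Run locally with `uvicorn main:app --reload` (assuming the file is saved as main.py).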

Posted 2 days ago

Apply

0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Linkedin logo

Job Title: Senior Java Developer
Location: Airoli, Mumbai (Onsite)
Industry: BFSI / Fintech

About the Role:
We are looking for a highly skilled and passionate Senior Java Developer with strong hands-on experience in developing scalable, high-performance applications. You will play a critical role in building low-latency, high-throughput systems focused on risk and fraud monitoring in the BFSI domain.

Key Responsibilities:
• Design and develop microservices using Java (latest versions), Spring Boot, and RESTful APIs
• Build robust data streaming solutions using Apache Flink and Kafka
• Implement business rules using the Drools rule engine
• Contribute to the development of low-latency, high-throughput platforms for fraud detection and risk monitoring
• Participate in Agile development, code reviews, and CI/CD pipelines with a strong focus on Test-Driven Development (TDD)
• Debug complex issues and take full ownership from design to deployment
• Collaborate with cross-functional teams and participate in cloud-native development using AWS (IaaS/PaaS)

Required Skills:
• Java, Spring Boot, REST APIs, virtual threads
• Apache Flink, Kafka: real-time data stream processing
• Drools rule engine
• Strong grasp of J2EE, OOP principles, and design patterns
• Experience working with CI/CD tools, Git, Quay, and TDD
• Familiarity with cloud-native solutions, especially in AWS environments

Preferred Experience:
• BFSI/Fintech domain experience building risk and fraud monitoring applications
• Exposure to Agile methodology and tools like JIRA
• Solid communication skills and a strong sense of ownership

Posted 2 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Req ID: 26743

Includes the following essential duties and responsibilities (other duties may also be assigned):
Supermicro seeks a qualified QA manager with hands-on experience to create and enforce quality standards for web-based products. As a QA manager, you will leverage your expert technical knowledge and past implementation experience in developing processes and standards to build our new cloud solution based on the latest industry cloud software development technologies, such as the LAMP stack (Linux, Apache, Python, MySQL, etc.). You will develop comprehensive test plans, strategies, and schedules for enterprise-scale requirements and lead their initial adoption across various test cases. You will be responsible for managing the lab hardware and quality processes, and will need an excellent understanding of infrastructure operations, tools, and patterns used in an agile, continuous-delivery development environment.
• Monitor and report using test tools for automated, manual, and regression testing
• Knowledgeable in VDBench and Jenkins
• Skilled in understanding and deploying cloud technologies

About Supermicro:
Supermicro® is a top-tier provider of advanced server, storage, and networking solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, Hyperscale, HPC, and IoT/Embedded customers worldwide. We are the #5 fastest-growing company among the Silicon Valley Top 50 technology firms. Our unprecedented global expansion has provided us with the opportunity to offer a large number of new positions to the technology community. We seek talented, passionate, and committed engineers, technologists, and business leaders to join us.

Qualifications:
• Education and/or experience: BS/MS in EE, CE, or ME
• 5+ years of quality assurance expertise
• Experience with Agile development tools (Redmine, Git)
• Confident presenter and strong influencer; able to adapt level and style to the audience

EEO Statement:
Supermicro is an Equal Opportunity Employer and embraces diversity in our employee population. It is the policy of Supermicro to provide equal opportunity to all qualified applicants and employees without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, protected veteran status or special disabled veteran, marital status, pregnancy, genetic information, or any other legally protected status.
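To make the automated and regression testing duties concrete, here is a minimal illustrative pytest smoke-test sketch; the base URL and the /health endpoint are hypothetical placeholders, not part of the listing.

```python
# Minimal pytest smoke-test sketch for a web-based product (illustrative;
# the URL and endpoints are placeholders). Requires: pip install pytest requests
import requests

BASE_URL = "http://localhost:8080"  # placeholder target

def test_homepage_is_reachable():
    resp = requests.get(f"{BASE_URL}/", timeout=5)
    assert resp.status_code == 200

def test_health_endpoint_reports_ok():
    # Assumes the service exposes a /health endpoint (hypothetical).
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"
```

Run with `pytest -v`; such checks slot naturally into a Jenkins regression stage.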

Posted 2 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

• 5 years of experience as a Data Engineer or in a similar role.
• Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field.
• Strong knowledge of data engineering tools and technologies (e.g., SQL, ETL, data warehousing).
• Experience with data pipeline frameworks and data processing platforms (e.g., Apache Kafka, Apache Spark).
• Proficiency in programming languages such as Python, Java, or Scala.
• Experience with cloud platforms (e.g., AWS, Google Cloud Platform, Azure).
• Knowledge of data modeling, database design, and data governance.
• MongoDB is a must.
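Since MongoDB is called out as mandatory, here is a minimal illustrative PyMongo sketch; the connection URI, database, and collection names are hypothetical placeholders.

```python
# Minimal PyMongo sketch (illustrative; connection URI, database, and
# collection are placeholders). Requires: pip install pymongo
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
events = client["analytics"]["events"]             # placeholder db/collection

# Insert a document and query it back.
events.insert_one({"user_id": 42, "action": "login", "source": "web"})
for doc in events.find({"action": "login"}).limit(5):
    print(doc["user_id"], doc["action"])

# An index on a frequently filtered field keeps such queries fast.
events.create_index("action")
client.close()
```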

Posted 2 days ago

Apply

6.0 - 8.0 years

0 Lacs

Kochi, Kerala, India

On-site

Linkedin logo

• 6-8 years of experience as a full-stack Java developer
• Expertise in UI frameworks such as AngularJS, Node.js, React, and Bootstrap
• Must have hands-on experience with Apache Camel
• Experience with front-end technologies using HTML5, CSS3, Bootstrap, and SASS
• Experience in JavaScript/TypeScript frameworks such as ReactJs, NodeJs, and AngularJs
• In-depth experience in responsive web design and development
• Hands-on experience on the Linux/Unix platform with knowledge of day-to-day routine commands
• Java, SOA, and web services (REST/SOAP) required
• Knowledge of DevOps processes within enterprise architecture
• Experience in Java 8, Spring, Spring Boot, microservices, ORM tools, and cloud technologies
• Experience with Java microservices architecture
• Experience designing, implementing, and deploying microservices in distributed systems
• Good to have knowledge and experience deploying applications to AWS Cloud using Jenkins and Docker
• Strong knowledge of and experience with SQL queries and databases like PostgreSQL/SQL Server
• Familiarity with a source control system (GitHub, SVN, etc.)
• Experience in unit testing code with Jest/Enzyme/Jasmine/Mocha/Chai is desired
• Experience in agile delivery and tools like Jira

Posted 2 days ago

Apply

Exploring Apache Jobs in India

The Apache Software Foundation maintains a widely used family of open-source software projects. In India, the demand for professionals with expertise in Apache tools and technologies is on the rise, and job seekers looking to pursue a career in Apache-related roles have a wealth of opportunities across industries. Let's delve into the Apache job market in India to gain a better understanding of the landscape.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their thriving IT sectors and see a high demand for Apache professionals across different organizations.

Average Salary Range

The salary range for Apache professionals in India varies based on experience and skill level:

  • Entry-level: INR 3-5 lakhs per annum
  • Mid-level: INR 6-10 lakhs per annum
  • Experienced: INR 12-20 lakhs per annum

Career Path

In the Apache job market in India, a typical career path may progress as follows:

  1. Junior Developer
  2. Developer
  3. Senior Developer
  4. Tech Lead
  5. Architect

Related Skills

Besides expertise in Apache tools and technologies, professionals in this field are often expected to have skills in:

  • Linux
  • Networking
  • Database Management
  • Cloud Computing

Interview Questions

  • What is Apache HTTP Server and how does it differ from Apache Tomcat? (medium)
  • Explain the difference between Apache Hadoop and Apache Spark. (medium)
  • What is mod_rewrite in Apache and how is it used? (medium)
  • How do you troubleshoot common Apache server errors? (medium)
  • What is the purpose of .htaccess file in Apache? (basic)
  • Explain the role of Apache Kafka in real-time data processing. (medium)
  • How do you secure an Apache web server? (medium)
  • What is the significance of Apache Maven in software development? (basic)
  • Explain the concept of virtual hosts in Apache. (basic)
  • How do you optimize Apache web server performance? (medium)
  • Describe the functionality of Apache Solr. (medium)
  • What is the purpose of Apache Camel? (medium)
  • How do you monitor Apache server logs? (medium) (see the log-parsing sketch after this list)
  • Explain the role of Apache ZooKeeper in distributed applications. (advanced)
  • How do you configure SSL/TLS on an Apache web server? (medium)
  • Discuss the advantages of using Apache Cassandra for data management. (medium)
  • What is the Apache Lucene library used for? (basic)
  • How do you handle high traffic on an Apache server? (medium)
  • Explain the concept of .htpasswd in Apache. (basic)
  • What is the role of Apache Thrift in software development? (advanced)
  • How do you troubleshoot Apache server performance issues? (medium)
  • Discuss the importance of Apache Flume in data ingestion. (medium)
  • What is the significance of Apache Storm in real-time data processing? (medium)
  • How do you deploy applications on Apache Tomcat? (medium)
  • Explain the concept of .htaccess directives in Apache. (basic)
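
As a worked example for the log-monitoring question above, here is a minimal illustrative Python sketch that tallies HTTP status codes from an Apache access log in Common Log Format; the log path is a placeholder.

```python
# Minimal Apache access-log analysis sketch (illustrative; the log path is a
# placeholder). Tallies HTTP status codes from a Common Log Format file.
import re
from collections import Counter

# Common Log Format: host ident user [time] "request" status bytes
LOG_LINE = re.compile(
    r'^(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+)'
)

def status_counts(path: str) -> Counter:
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = LOG_LINE.match(line)
            if match:
                counts[match.group("status")] += 1
    return counts

if __name__ == "__main__":
    # Placeholder path; on many distros the log lives under /var/log/apache2
    # or /var/log/httpd.
    for status, count in status_counts("/var/log/apache2/access.log").most_common():
        print(status, count)
```

A spike in 5xx counts from a script like this is often the first signal worth alerting on.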

Conclusion

As you embark on your journey to explore Apache jobs in India, it is essential to stay updated on the latest trends and technologies in the field. By honing your skills and preparing thoroughly for interviews, you can position yourself as a competitive candidate in the Apache job market. Stay motivated, keep learning, and pursue your dream career with confidence!
