
10949 Apache Jobs - Page 49

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Join our Team

About this opportunity: With the introduction of 5G and cloud, the role of IT Managed Services has evolved to become an enabler of new revenue opportunities, in addition to delivering efficient cloud and IT operations for service providers on their 5G journey. Join us to understand how different technologies come together to build a best-in-class solution that has made Ericsson lead the 5G evolution. We will also explain how you can be part of this outstanding culture and advance your career while creating a global impact. We believe in trust: we trust each other to do the right things. We believe in taking decisions as close to the product and technical expertise as possible. We believe in creativity, trying new things and learning from our mistakes. We believe in sharing our insights and helping one another to build an even better user plane.

What you will do: Design, develop, and consume REST APIs efficiently using Java and Spring Boot. Implement robust object-oriented programming (OOP) principles. Leverage multithreading for concurrent programming tasks to optimize application performance. Integrate and work with the Kafka message bus using the confluent-kafka Python library. Write and maintain high-quality unit tests using JUnit for thorough test coverage. Build and containerize applications using Docker, and deploy them to Kubernetes clusters with Helm. Collaborate using version control systems like GitLab and contribute to CI/CD pipelines (knowledge of GitLab CI is a plus).

The skills you bring: Minimum years of relevant experience: 15 to 20. Deep knowledge of microservices architecture and REST API design using Java and Spring Boot. Proficiency with containerization and orchestration tools (Docker, Kubernetes, Helm). Familiarity with software development lifecycle tools and processes, especially in Agile environments.
Experience in product development and familiarity with *nix-based operating systems.

Mandatory skills: Java SE/EE including Spring Boot, microservices, Linux, Python, C++, multithreading, cloud-native architecture, DevOps, GitLab, Kafka, massive data streaming and processing, and experience with GitLab CI pipelines. Experience working with Apache Kafka or Confluent Kafka for message bus integration. Contributions to open-source projects. Exposure to Python and C++. Experience with cloud-native architecture and development.

Why join Ericsson? At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible, and to build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply? Click here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Learn more.

Primary country and city: India (IN) || Noida
Req ID: 766688
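The posting above calls for multithreading to optimize I/O-bound work. As a minimal illustrative sketch (not Ericsson's code; the service names and the stubbed health check are invented for illustration), Python's `ThreadPoolExecutor` can fan out concurrent calls:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_status(service):
    # Stand-in for an I/O-bound call (e.g. an HTTP health check);
    # a real implementation would issue the request here.
    return service, "UP"

def check_all(services, workers=4):
    # While one thread waits on the network, the others proceed,
    # so total latency approaches that of the slowest single call.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(fetch_status, services))

print(check_all(["billing", "inventory", "gateway"]))
```

Threads (rather than processes) are the idiomatic choice here because the work is I/O-bound, so the GIL is released while each call waits.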

Posted 1 week ago


4.0 years

0 Lacs

India

On-site

What we do: We are currently building the next generation of cargo management applications using modern and widely used technologies, and we are looking for motivated people to join our team of interdisciplinary developers.

What we currently use: We build backend services with Java, Spring Boot, web services, and MongoDB. We integrate with existing core Java cargo applications via REST APIs. We build frontends with Angular and the Ionic framework for mobile apps. We deploy to Linux servers, private datacenters, and AWS using Ansible and Maven. We do continuous integration with GitLab/Bamboo. We use Scrum to organize ourselves.

What we expect from you: Bachelor's degree in Information Technology, Computer Science, Computer Engineering, or equivalent. Proven experience as a MongoDB DBA or similar role (4+ years recommended). Strong understanding of MongoDB architecture, including sharding, replication, and indexing. Experience working with MongoDB Atlas or self-managed clusters. Proficiency with Linux systems and shell scripting. Familiarity with monitoring tools and performance tuning techniques. Experience with backup and disaster recovery processes. Review the current Debezium deployment architecture, including Oracle connector configuration, Kafka integration, and downstream consumers. Analyze the Oracle database setup for CDC compatibility (e.g., redo log configuration, supplemental logging, privileges). Evaluate connector performance, lag, and error handling mechanisms. Identify bottlenecks, misconfigurations, or anti-patterns in the current implementation. Provide a detailed report with findings, best practices, and actionable recommendations. Optionally, support implementation of recommended changes and performance tuning.

What we require from you: 4+ years of experience as a MongoDB DBA in production environments. Deep expertise in MongoDB architecture, including replication, sharding, backup, and recovery. Strong hands-on experience with Debezium, especially the Oracle connector (LogMiner). Deep understanding of Oracle internals relevant to CDC: redo logs, SCNs, archive log mode, supplemental logging. Proficiency with Apache Kafka and Kafka ecosystem tools. Experience with monitoring and debugging Debezium connectors in production environments. Ability to analyze logs, metrics, and connector configurations to identify root causes of issues. Strong documentation and communication skills for delivering technical assessments.
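One concrete check behind "evaluate connector performance and lag" is comparing each partition's log-end offset with the consumer group's committed offset. A minimal sketch of that arithmetic (the offset numbers are made up for illustration; in practice they come from Kafka's admin API or the connector's JMX metrics):

```python
def consumer_lag(end_offsets, committed):
    """Lag per partition: log-end offset minus last committed offset.

    Both arguments map partition id -> offset. A partition with no
    committed offset is treated as fully unconsumed (offset 0).
    """
    return {p: end - committed.get(p, 0) for p, end in end_offsets.items()}

end = {0: 1500, 1: 980, 2: 2040}
done = {0: 1500, 1: 900}
print(consumer_lag(end, done))  # {0: 0, 1: 80, 2: 2040}
```

A steadily growing lag on one partition, as here on partition 2, is the usual signal that a sink consumer or the connector task has stalled.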

Posted 1 week ago


3.0 - 5.0 years

9 - 13 Lacs

Pune

Work from Office

Shift: Rotational (24x7 support)

Job Summary: We are seeking a dedicated L1 Linux Support Engineer to provide frontline operational support for enterprise Linux servers. The engineer will focus primarily on L1 responsibilities, but must also have a basic to intermediate understanding of L2 tasks for occasional escalated activity handling and team backup.

L1 Responsibilities (Primary): Monitor system performance, server health, and basic services using tools like Nagios, Zabbix, or similar. Handle tickets for standard issues like disk space, service restarts, log checks, user creation, and permission troubleshooting. Perform basic troubleshooting of server access issues (SSH, sudo access, etc.). Perform routine activities such as patching coordination, backup monitoring, antivirus checks, and compliance tasks. Execute pre-defined SOPs and escalation procedures in case of critical alerts or failures. Regularly update incident/ticket tracking systems (e.g., ServiceNow, Remedy). Provide hands-and-feet support at the data center if required.

L2 Awareness (Secondary / Occasional Tasks): Understand LVM management, disk extension, and logical volume creation. Awareness of service- and daemon-level troubleshooting (Apache, NGINX, SSH, cron, etc.). Ability to assist in OS patching, kernel updates, and troubleshooting of post-patch issues. Exposure to basic scripting (Bash, shell) to automate repetitive tasks. Familiarity with tools like Red Hat Satellite, Ansible, and centralized logging (e.g., syslog, journalctl). Understanding of basic clustering, HA concepts, and DR readiness tasks. Assist the L2 team during major incidents or planned changes.

Required Skills: Hands-on experience with RHEL, CentOS, Ubuntu, or other enterprise Linux distributions. Basic knowledge of Linux command-line tools, file systems, and system logs. Good understanding of the Linux boot process, run levels, and systemd services. Basic networking knowledge (ping, traceroute, netstat, etc.). Familiarity with ITSM tools and the ticketing process.

Nice to Have: RHCSA certification (preferred). Exposure to virtualization (VMware, KVM) and cloud environments (AWS, Azure). Experience with shell scripting or Python for automation. Understanding of the ITIL framework.

Soft Skills: Strong communication and coordination skills. Ability to follow instructions and SOPs. Willingness to learn and take ownership of tasks. Team player with a proactive mindset.
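A typical L1 disk-space ticket boils down to parsing `df -P`-style output and flagging filesystems above a usage threshold. A hedged sketch in Python (the sample output below is fabricated for illustration; a real check would capture `df` via subprocess and feed an alert into the ticketing tool):

```python
def over_threshold(df_output, limit=80):
    """Return (mount point, use%) for rows of `df -P`-style output at or above limit."""
    alerts = []
    for line in df_output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        use_pct = int(fields[4].rstrip("%"))  # 5th column is Capacity, e.g. "90%"
        if use_pct >= limit:
            alerts.append((fields[5], use_pct))
    return alerts

sample = """Filesystem 1024-blocks Used Available Capacity Mounted
/dev/sda1 100 90 10 90% /
/dev/sdb1 100 40 60 40% /data
"""
print(over_threshold(sample))  # [('/', 90)]
```

Using `df -P` (POSIX mode) matters in real scripts: it keeps each filesystem on one line, so the column positions assumed here stay stable.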

Posted 1 week ago


0.0 - 3.0 years

3 - 5 Lacs

Pune, Maharashtra

On-site

Key Responsibilities: The primary responsibility of this role is to contribute to the design and implementation of systems and solutions on AWS and other cloud infrastructures. Experience building and supporting enterprise platforms. Ensure application performance, uptime, and scale, maintaining high standards of code quality and thoughtful design. Manage cloud environments in accordance with customers' security guidelines. Troubleshoot incidents, identify root causes, fix and document problems, and implement preventive measures. Identify cloud budget savings opportunities through improved configuration of existing systems. Monitor and review alerts and notifications related to cloud service events. Employ exceptional problem-solving skills, with the ability to see and solve issues before they affect business productivity.

Required Skills / Technical Experience: In-depth knowledge of UNIX/Linux or Windows environments. Ability to manage and maintain cloud services and settings using the cloud vendor console for providers like AWS, Azure, GCP, DigitalOcean, Hostinger, etc. End-to-end configuration and troubleshooting of web and application servers (Nginx, Apache, Tomcat, IIS). Proven experience with AWS/Azure/Google Cloud platforms, microservices, RESTful API-based architecture, and networking. Good hands-on experience with AWS cloud services: EC2, S3, CloudWatch, CDN (CloudFront), etc. Ability to configure backup and disaster recovery and troubleshoot issues related to backup and DR. Node.js application deployment experience.

Experience: 3-5 years. Education: B.E/B.Tech.

Job Types: Full-time, Permanent. Pay: ₹300,000.00 - ₹500,000.00 per year.

Application Questions: Can you join immediately? Do you have experience in Node.js app deployment?

Education: Bachelor's (Required). Experience: 3 years total work (Required). Location: Pune, Maharashtra (Required). Work Location: In person. Application Deadline: 02/08/2025
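"Monitor and review alerts related to cloud service events" usually means threshold alarms evaluated over metric datapoints. The sketch below mimics the spirit of a CloudWatch-style alarm in plain Python; it is an illustration only (real alarms add missing-data handling, multiple comparison operators, and an INSUFFICIENT_DATA state):

```python
def alarm_state(datapoints, threshold, periods):
    """Return 'ALARM' if the last `periods` datapoints all exceed threshold,
    else 'OK'. A simplified stand-in for a cloud monitoring alarm rule."""
    recent = datapoints[-periods:]
    if len(recent) == periods and all(v > threshold for v in recent):
        return "ALARM"
    return "OK"

cpu = [45.0, 62.0, 91.5, 93.0, 97.2]  # invented CPU-utilization samples
print(alarm_state(cpu, threshold=90.0, periods=3))  # ALARM
```

Requiring several consecutive breaching datapoints, rather than alarming on a single spike, is the standard way to avoid flapping alerts.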

Posted 1 week ago


3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Apache Spark
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly within the business environment. You will also engage in testing and troubleshooting to enhance application performance and user experience, while continuously seeking ways to improve processes and solutions.

Roles & Responsibilities: Expected to perform independently and become an SME. Active participation and contribution in team discussions is required. Contribute to providing solutions to work-related problems. Assist in the documentation of application processes and workflows. Engage in code reviews to ensure quality and adherence to best practices.

Professional & Technical Skills: Must-have: proficiency in Apache Spark. Good-to-have: experience with data processing frameworks. Strong understanding of distributed computing principles. Familiarity with cloud platforms and services. Experience in developing and deploying applications in a microservices architecture.

Additional Information: The candidate should have a minimum of 3 years of experience in Apache Spark. This position is based at our Chennai office. 15 years of full-time education is required.
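Spark's core transformation model can be illustrated without a cluster. The plain-Python word count below mirrors the classic `flatMap` → `map` → `reduceByKey` chain conceptually; it is not PySpark, just a sketch of the distributed-computing model this role works with:

```python
from collections import defaultdict
from itertools import chain

lines = ["to be or not to be", "to see or not to see"]

# flatMap: split each line into words
words = chain.from_iterable(line.split() for line in lines)
# map: pair each word with a count of 1
pairs = ((w, 1) for w in words)
# reduceByKey: sum counts per key (Spark reduces within each
# partition first, then shuffles partial sums between nodes)
counts = defaultdict(int)
for w, n in pairs:
    counts[w] += n

print(dict(counts))  # {'to': 4, 'be': 2, 'or': 2, 'not': 2, 'see': 2}
```

The per-partition pre-aggregation noted in the comment is what makes `reduceByKey` cheaper than a naive group-then-sum: far less data crosses the network during the shuffle.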

Posted 1 week ago


15.0 years

0 Lacs

Pune, Maharashtra, India

On-site

HCL Software (hcl-software.com) delivers software that fulfils the transformative needs of clients around the world. We build award-winning software across AI, automation, data & analytics, security, and cloud. The HCL Unica+ Marketing Platform enables our customers to deliver precise, high-performance marketing campaigns across multiple channels like social media, ad-tech platforms, mobile applications, and websites. The Unica+ Marketing Platform is a data- and AI-first platform that enables our clients to deliver hyper-personalized offers and messages for customer acquisition, product awareness, and retention. We are seeking a Senior Architect/Developer with strong data science and machine learning skills and experience to deliver AI-driven marketing campaigns.

Responsibilities:

Designing and architecting end-to-end AI/ML solutions for marketing: The architect is responsible for designing robust, scalable, and secure AI/ML solutions specifically tailored for marketing challenges. This includes defining data pipelines, selecting appropriate machine learning algorithms and frameworks (e.g., for predictive analytics, customer segmentation, personalization, campaign optimization, sentiment analysis), designing model deployment strategies, and integrating these solutions seamlessly with existing marketing tech stacks and enterprise systems. They must consider the entire lifecycle from data ingestion to model monitoring and retraining.

Technical leadership: The AI/ML architect acts as a technical leader, providing guidance and mentorship to data scientists, ML engineers, and other development teams. They evaluate and select the most suitable AI/ML tools, platforms, and cloud services (AWS, GCP, Azure) for marketing use cases. The architect is also responsible for establishing and promoting best practices for MLOps (machine learning operations), model versioning, continuous integration/continuous deployment (CI/CD) for ML models, and ensuring data quality, ethical AI principles (e.g., bias, fairness), and regulatory compliance (e.g., data privacy laws).

Python programming & libraries: Proficient in Python, with extensive experience using Pandas for data manipulation, NumPy for numerical operations, and Matplotlib/Seaborn for data visualization.

Statistical analysis & modelling: Strong understanding of statistical concepts, including descriptive statistics, inferential statistics, hypothesis testing, regression analysis, and time series analysis.

Data cleaning & preprocessing: Expertise in handling messy real-world data, including dealing with missing values, outliers, data normalization/standardization, feature engineering, and data transformation.

SQL & database management: Ability to query and manage data efficiently from relational databases using SQL, and ideally some familiarity with NoSQL databases.

Exploratory data analysis (EDA): Skill in visually and numerically exploring datasets to understand their characteristics and identify patterns, anomalies, and relationships.

Machine learning algorithms: In-depth knowledge of and practical experience with a wide range of ML algorithms, such as linear models, tree-based models (random forests, gradient boosting), SVMs, k-means, and dimensionality reduction techniques (PCA).

Deep learning frameworks: Proficiency with at least one major deep learning framework, such as TensorFlow or PyTorch, including an understanding of neural network architectures (CNNs, RNNs, Transformers) and their application to various problems.

Model evaluation & optimization: Ability to select appropriate evaluation metrics (e.g., precision, recall, F1-score, AUC-ROC, RMSE) for different problem types, diagnose model performance issues (bias-variance trade-off), and apply optimization techniques.

Deployment & MLOps concepts: Deploy machine learning models into production environments, including API creation, containerization (Docker), version control for models, and monitoring.

Qualifications & Skills: At least 15 years of experience across data architecture, data science, and machine learning. Experience in delivering AI/ML models for marketing outcomes such as customer acquisition, customer churn, and next best product or offer; this is a mandatory requirement. Experience with Customer Data Platforms (CDPs) and marketing platforms like Unica, Adobe, Salesforce, Braze, Treasure Data, Epsilon, and Tealium is mandatory. Experience with AWS SageMaker is advantageous. Experience with LangChain and RAG for generative AI is advantageous. Experience with ETL processes and tools like Apache Airflow is advantageous. Expertise in integration tools and frameworks like Postman, Swagger, and API gateways. Ability to work well within an agile team environment and apply the related working methods. Excellent communication and interpersonal skills. A 4-year degree in Computer Science or IT is a must.

Travel: 30% +/- travel required
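The evaluation metrics named above (precision, recall, F1) are simple to compute from first principles; a sketch with made-up binary labels for a churn-style problem (real work would use scikit-learn's metrics module rather than hand-rolled code):

```python
def prf1(y_true, y_pred):
    """Precision, recall and F1 for binary labels (1 = positive class)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# e.g. churn prediction: 1 = churned (labels invented for illustration)
p, r, f = prf1([1, 1, 0, 0, 1], [1, 0, 1, 0, 1])
print(round(p, 2), round(r, 2), round(f, 2))
```

For marketing use cases like churn, precision and recall matter more than raw accuracy because the positive class is usually a small minority.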

Posted 1 week ago


7.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

SLSQ126R545: As an Enterprise Account Executive at Databricks, you are a sales professional experienced in leading go-to-market campaigns in one of the largest banking institutions in India. You know how to sell innovation and change through customer vision expansion and can guide deals forward to compress decision cycles. You love understanding a product in depth and are passionate about communicating value to customers and system integrators. Databricks operates at the leading edge of the Unified Data Analytics and AI space. Our customers turn to us to lead the accelerated innovation that their businesses need to gain a first-mover advantage in today's ultra-competitive landscape. As we continue our rapid expansion, we are looking for a creative, execution-oriented Enterprise Account Executive to join the Retail & CPG team and maximize the phenomenal market opportunity that exists for Databricks. Reporting to our Director of Enterprise Sales, you will manage a strategic enterprise client in the BFSI vertical. Your informed perspective on big data, advanced analytics, and AI will help guide your successful execution strategy and allow you to provide genuine value to the client.

The Impact You Will Have: Present a territory plan within the first 90 days. Meet with CIOs, IT executives, LOB executives, program managers, and other important partners. Close both new and existing accounts. Identify and close quick, small wins while managing longer, complex sales cycles. Exceed activity, pipeline, and revenue targets. Track all customer details, including use case, purchase time frames, next steps, and forecasting, in Salesforce. Use a solution-based approach to selling and creating value for customers. Promote Databricks' enterprise cloud data platform powered by Apache Spark. Ensure 100% satisfaction among all customers. Prioritize opportunities and apply appropriate resources. Build a plan for success internally at Databricks and externally with your accounts.

What We Look For: Previous field sales experience within big data, cloud, SaaS, and a consumption selling motion. Prior customer relationships with CIOs, program managers, and essential decision makers at local accounts. The ability to simplify a technical capability into a value-based benefit. 7+ years of enterprise sales experience exceeding quotas in BFSI accounts like ICICI. Bachelor's degree.

About Databricks: Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics, and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of the lakehouse, Apache Spark™, Delta Lake, and MLflow. To learn more, follow Databricks on Twitter, LinkedIn, and Facebook.

Benefits: At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.

Our Commitment to Diversity and Inclusion: At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance: If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.

Posted 1 week ago


3.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Java Enterprise Edition
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. You will work on developing innovative solutions to enhance user experience and streamline processes.

Roles & Responsibilities: Expected to perform independently and become an SME. Active participation and contribution in team discussions is required. Contribute to providing solutions to work-related problems. Collaborate with cross-functional teams to analyze user needs and design efficient applications. Develop high-quality software design and architecture. Write clean, scalable code using Java EE technologies. Test and deploy applications and systems. Troubleshoot, debug, and upgrade existing software.

Professional & Technical Skills: Must-have: proficiency in Java Enterprise Edition. Strong understanding of the software development lifecycle. Experience with web application development using Java EE technologies. Knowledge of relational databases and SQL. Hands-on experience with application servers like Apache Tomcat or JBoss.

Additional Information: The candidate should have a minimum of 3 years of experience in Java Enterprise Edition. This position is based at our Coimbatore office. 15 years of full-time education is required.

Posted 1 week ago


7.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

SLSQ126R458: As an Enterprise Account Executive at Databricks, you are a sales professional experienced in leading go-to-market campaigns in a few of the largest Indian conglomerates. You know how to sell innovation and change through customer vision expansion and can guide deals forward to compress decision cycles. You love understanding a product in depth and are passionate about communicating value to customers and system integrators. Databricks operates at the cutting edge of the Unified Data Analytics and AI space. Our customers turn to us to lead the accelerated innovation that their businesses need to gain a first-mover advantage in today's ultra-competitive landscape. As we continue our rapid expansion, we are looking for a creative, execution-oriented Enterprise Account Executive to join us and maximize the phenomenal market opportunity that exists for Databricks. Reporting to our Director of Enterprise Sales, you will manage a strategic enterprise vertical. Your informed perspective on big data, advanced analytics, and AI will help guide your successful execution strategy and allow you to provide genuine value to the client.

The Impact You Will Have: Present a territory plan within the first 90 days. Meet with CIOs, IT executives, LOB executives, program managers, and other important partners. Close both new and existing accounts. Identify and close quick, small wins while managing longer, complex sales cycles. Exceed activity, pipeline, and revenue targets. Track all customer details, including use case, purchase time frames, next steps, and forecasting, in Salesforce. Use a solution-based approach to selling and creating value for customers. Promote Databricks' enterprise cloud data platform powered by Apache Spark. Ensure 100% satisfaction among all customers. Prioritize opportunities and apply appropriate resources. Build a plan for success internally at Databricks and externally with your accounts.

What We Look For: Previous field sales experience within big data, cloud, SaaS, and a consumption selling motion. Prior customer relationships with CIOs, program managers, and essential decision makers at local accounts. The ability to simplify a technical capability into a value-based benefit. 7+ years of enterprise sales experience exceeding quotas in larger accounts (preferably with Indian conglomerates like Reliance). Experience managing a small set of enterprise accounts rather than a broad territory. Bachelor's degree.

About Databricks: Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics, and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of the lakehouse, Apache Spark™, Delta Lake, and MLflow. To learn more, follow Databricks on Twitter, LinkedIn, and Facebook.

Benefits: At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.

Our Commitment to Diversity and Inclusion: At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance: If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.

Posted 1 week ago


8.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Summary: We are seeking a highly skilled and motivated Lead DevOps Engineer with solution architect expertise to manage end-to-end infrastructure projects across cloud, hybrid, and dedicated server environments. This role demands hands-on experience with WHM/cPanel, OpenPanel, and load balancers, and deep knowledge of modern DevOps practices. The ideal candidate will also lead a team of DevOps engineers, drive technical excellence, and serve as the go-to expert for scalable, secure, and high-availability infrastructure solutions.

Key Responsibilities:

DevOps & Infrastructure Management: Architect, implement, and maintain scalable infrastructure solutions across cloud and dedicated server environments. Manage hosting infrastructure including WHM/cPanel, OpenPanel, Apache/Nginx, MySQL, DNS, mail servers, and firewalls. Design and configure load balancing strategies using HAProxy, NGINX, or cloud-native load balancers. Automate provisioning, configuration, deployment, and monitoring using tools like Ansible, Terraform, and CI/CD (Jenkins, GitLab CI). Ensure infrastructure reliability, security, and disaster recovery processes are in place.

Solution Architecture: Translate business and application requirements into robust infrastructure blueprints. Lead design reviews and architectural discussions for client and internal projects. Create documentation and define architectural best practices for hosting and DevOps.

Team Management & Leadership: Lead and mentor a team of DevOps engineers across multiple projects. Allocate resources, manage project timelines, and ensure successful delivery. Foster a culture of innovation, continuous improvement, and collaboration. Conduct performance reviews, provide training, and support the career development of team members.

Monitoring, Security & Optimization: Set up and maintain observability systems (e.g., Prometheus, Grafana, Zabbix). Conduct performance tuning, cost optimization, and environment hardening. Ensure compliance with internal policies and external standards (ISO, GDPR, SOC 2, etc.).

Required Skills & Experience: 8+ years of experience in DevOps, systems engineering, or cloud infrastructure management. 3+ years of experience in team leadership or technical management. Proven expertise in hosting infrastructure, including WHM/cPanel, OpenPanel, Plesk, DNS, and mail configurations. Strong experience with Linux servers, networking, security, and automation scripting (Bash, Python). Hands-on experience with cloud platforms (AWS, Azure, GCP) and hybrid environments. Deep understanding of CI/CD pipelines, Docker/Kubernetes, and version control (Git). Familiarity with load balancing, high availability, and failover strategies.

Preferred Qualifications: Certifications such as AWS Solutions Architect, RHCE, CKA, or Linux Foundation Certified Engineer. Experience in IT services or hosting/cloud consulting environments. Knowledge of compliance frameworks (e.g., ISO 27001, SOC 2, PCI-DSS). Familiarity with agile methodologies and DevOps lifecycle management tools.
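The "load balancing strategies" mentioned above (in HAProxy, `balance roundrobin` vs `balance leastconn`) come down to a handful of backend-selection policies. The toy Python classes below sketch two of them; this is a conceptual illustration only (no weights, health checks, or connection release, and the backend names are invented):

```python
import itertools

class RoundRobin:
    """Cycle through backends in order, like HAProxy's `balance roundrobin`
    (simplified: equal weights, all backends assumed healthy)."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        return next(self._cycle)

class LeastConnections:
    """Route each request to the backend with the fewest active connections,
    like `balance leastconn`; ties go to the first backend listed."""
    def __init__(self, backends):
        self.active = {b: 0 for b in backends}

    def pick(self):
        backend = min(self.active, key=self.active.get)
        self.active[backend] += 1  # simplified: connections never close
        return backend

rr = RoundRobin(["web1", "web2"])
print([rr.pick() for _ in range(4)])  # ['web1', 'web2', 'web1', 'web2']
```

Round-robin suits uniform, short requests; least-connections behaves better when request durations vary widely, since slow backends naturally accumulate connections and receive less new traffic.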

Posted 1 week ago


5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Company Description York IE™ is the USA-based vertically integrated strategic growth and investment firm helping reshape the way companies are built, scaled, and monetized. We are driven by SaaS operational experience with complementary expertise to support startups & help them build their product to find the best product market fit. To know more about us visit www.york.ie Job Description We are currently seeking a highly proficient Full Stack Developer with a focus on Node.js & React.js, possessing strong expertise in JavaScript and both frontend and backend development, to become a valued member of our accomplished development team. The ideal candidate will play a key role in the design, creation, and upkeep of exceptional web applications that not only captivate our users but also align with business needs. Responsibilities: Design, develop, and maintain efficient, reusable, and reliable code across both frontend and backend using React.js and Node.js, following established coding standards and practices. Collaborate seamlessly with cross-functional teams, including product managers, UX/UI designers, and fellow developers, to deliver high-quality, integrated web applications. Build and maintain robust, high-performance APIs that facilitate communication between frontend applications and backend services. Implement secure, reliable, and scalable data storage solutions using databases like MongoDB, MySQL, or PostgreSQL, ensuring data integrity. Proactively identify, troubleshoot, and resolve performance issues, bugs, and other technical challenges across both frontend and backend. Participate actively in code and design reviews, offering valuable feedback and suggestions for enhancing overall system quality. Stay abreast of industry trends, emerging technologies, and best practices in both frontend (React.js) and backend (Node.js) development. 
- Mentor and guide junior developers, cultivating a culture of continuous learning, growth, and improvement within the development team.
- Collaborate closely with stakeholders to gather and translate requirements into comprehensive technical specifications, ensuring alignment between business needs and technical implementations.

Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum of 5 years of professional experience in both frontend and backend development, with a strong focus on React.js and Node.js.
- Proficiency in JavaScript (ES6+), TypeScript, HTML5, and CSS3.
- Extensive expertise in React.js principles and core concepts, including component lifecycle, state management, and hooks.
- Solid understanding of Node.js frameworks, including Express, Koa, or NestJS.
- Proficiency in building and maintaining RESTful APIs, adhering to design principles and best practices.
- Familiarity with both relational (MySQL, PostgreSQL) and NoSQL (MongoDB) databases.
- Knowledge of modern development tools like Git, Docker, and CI/CD pipelines.
- Experience working with GraphQL or other API technologies.
- Strong analytical and problem-solving skills for both frontend and backend challenges.
- Proven track record in optimizing application performance and adhering to best practices in both frontend and backend.
- Agile/Scrum development experience, with the ability to adapt to evolving project requirements.

Nice to have:
- Familiarity with design systems, component libraries, or UI frameworks like Material-UI or Ant Design, streamlining frontend development.
- Proficiency with modern frontend frameworks, including Next.js, Astro, or Remix, to enhance web application performance.
- Knowledge of AWS CloudFormation, facilitating the creation and management of AWS resources as code.
- Exposure to message brokers such as RabbitMQ or Apache Kafka.
- Proficiency with serverless architectures, such as AWS Lambda or Google Cloud Functions.
- Experience with microservices architecture and container orchestration tools such as Kubernetes or Docker Swarm, contributing to scalable backend solutions.

Perks & Benefits:
- Flexible work timings to accommodate personal and professional needs.
- One week of paid vacation in July, plus other floating leave policies.
- Comprehensive medical insurance coverage, not part of your CTC.
- Regular team lunches to foster camaraderie and collaboration.
- In-house dry pantry for convenient snacking and refreshments.

Posted 1 week ago

Apply

1.0 - 3.0 years

4 - 7 Lacs

Mumbai

Work from Office

Role Purpose
The purpose of the role is to resolve, maintain, and manage clients' software/hardware/network based on service requests raised by end users, as per the defined SLAs, ensuring client satisfaction.

Do
- Ensure timely response to all tickets raised by the client end user.
- Resolve service requests while maintaining quality parameters.
- Act as a custodian of the client's network/server/system/storage/platform/infrastructure and other equipment, tracking its proper functioning and upkeep.
- Keep a check on the number of tickets raised (dial home/email/chat/IMS), ensuring the right resolution within the defined timeframe.
- Perform root cause analysis of the tickets raised and create an action plan to resolve the problem and ensure client satisfaction.
- Provide acceptance and immediate resolution for high-priority tickets/service requests.
- Install and configure software/hardware based on service requests.
- Adhere 100% to timelines as per the priority of each issue, to manage client expectations and ensure zero escalations.
- Provide application/user access as per client requirements and requests to ensure timely resolution.
- Track all tickets from acceptance to resolution stage, per the resolution time defined by the customer.
- Maintain timely backups of important data/logs and management resources to ensure the solution is of acceptable quality and maintains client satisfaction.
- Coordinate with the on-site team for complex problem resolution and ensure timely client servicing.
- Review the logs gathered by chatbots and ensure all service requests/issues are resolved in a timely manner.

Mandatory Skills: WebLogic Admin. Experience: 1-3 Years.

Posted 1 week ago

Apply

2.0 - 4.0 years

2 - 5 Lacs

Navi Mumbai

Work from Office

Installation and configuration of Apache web servers. Working knowledge of Linux servers. Good technical and analytical skills. Troubleshooting of Apache HTTP Server, SSL, and Tomcat. B.Tech / B.E. 24×7 shift.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : Microsoft Azure Databricks, Apache Spark, Microsoft Azure Data Services Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly within the existing infrastructure. You will also engage in testing and troubleshooting to enhance application performance and user experience, while continuously seeking opportunities for improvement and innovation in application design and functionality. Roles & Responsibilities: - Expected to perform independently and become an SME. - Active participation/contribution in team discussions is required. - Contribute to providing solutions to work-related problems. - Assist in the documentation of application processes and workflows. - Engage in code reviews to ensure quality and adherence to best practices. Professional & Technical Skills: - Must Have Skills: Proficiency in Microsoft Azure Databricks, Apache Spark, Microsoft Azure Data Services. - Strong understanding of data integration techniques and ETL processes. - Experience with application development frameworks and methodologies. - Familiarity with cloud computing concepts and services. - Ability to troubleshoot and optimize application performance. Additional Information: - The candidate should have a minimum of 3 years of experience in Microsoft Azure Databricks. - This position is based in Pune. - A 15 years full time education is required.
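The transform half of the ETL work this role describes can be illustrated without a cluster. Below is a hedged, plain-Python sketch of a group-and-aggregate step; the `aggregate_sales` name and the sample fields are invented for the example, and on Azure Databricks the equivalent logic would typically be a `groupBy(...).sum(...)` over a Spark DataFrame rather than a dict.

```python
from collections import defaultdict

def aggregate_sales(rows):
    """Group raw rows by region and sum amounts -- a toy stand-in for the
    transform step of an ETL pipeline. (Illustrative only; not Spark API code.)"""
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += row["amount"]
    return dict(totals)

raw = [
    {"region": "west", "amount": 10.0},
    {"region": "east", "amount": 4.5},
    {"region": "west", "amount": 2.5},
]
summary = aggregate_sales(raw)
# summary holds one total per region
```

In PySpark the same intent would read roughly as `df.groupBy("region").sum("amount")`, with Spark handling partitioning and shuffles.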

Posted 1 week ago

Apply

2.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

We are seeking Python Developers with 2-5 years of experience in implementing, deploying, and scaling solutions. Key Responsibilities: Collaborate with clients and project teams to understand business requirements, create solutions, and develop efficient, high-quality code that meets or exceeds client expectations. Optimize application performance across multiple delivery platforms, including AWS, Azure, and GCP. Design and implement low-latency, high-availability, and high-performance applications using Django, Flask, or FastAPI. Lead the integration of front-end user interface elements with server-side logic. Integrate multiple data sources and databases into a unified system while ensuring proper data storage and third-party library integration. Create scalable and optimized database schemas tailored to business logic. Handle large datasets from databases or via HTTP(S)/WebSockets. Conduct thorough testing using pytest and unittest, and perform debugging to ensure applications run smoothly. Provide mentorship and guidance to junior developers. Communicate effectively with clients regarding project updates and technical solutions. Skills & Qualifications: 3+ years of experience as a Python developer with strong client communication and team leadership skills. In-depth knowledge of Python frameworks such as Django, Flask, and FastAPI. Strong understanding of cloud technologies, including AWS, Azure, and GCP. Deep understanding of microservices and multi-tenant architecture. Familiarity with serverless computing (AWS Lambda, Azure Functions). Experience with deployment using Docker, Nginx, Gunicorn, and Uvicorn. Hands-on experience with SQL and NoSQL databases such as PostgreSQL and AWS DynamoDB. Strong understanding of coding design patterns and SOLID principles. Experience with Object-Relational Mappers (ORMs) such as SQLAlchemy and Django ORM. Ability to handle multiple API integrations and write modular, reusable code. 
Experience with front-end technologies such as React, Vue, and HTML/CSS/JS (preferred). Proficiency in authentication and authorization mechanisms across multiple systems. Understanding of scalable application design principles and event-driven programming. Strong skills in unit testing, debugging, and code optimization. Experience with Agile/Scrum methodologies. Experience with LangChain and AWS Bedrock (preferred). Familiarity with container orchestration tools like Kubernetes. Understanding of data processing frameworks like Apache Kafka and Spark (preferred). Experience with CI/CD pipelines and automation tools like Jenkins, GitLab CI, or CircleCI.
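One responsibility above, handling large datasets from databases or HTTP(S), usually comes down to streaming rows in fixed-size batches instead of materializing everything in memory. A minimal stdlib sketch of that pattern; the `chunked` helper is a hypothetical name, not something from the posting:

```python
from itertools import islice
from typing import Iterable, Iterator, List

def chunked(rows: Iterable[dict], size: int) -> Iterator[List[dict]]:
    """Yield fixed-size batches so a large result set never sits fully in memory."""
    it = iter(rows)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# Simulate a large query result with a generator (nothing is materialized up front).
rows = ({"id": i, "value": i * 2} for i in range(10))
batches = list(chunked(rows, 4))
# batches now holds 3 batches of sizes 4, 4, and 2
```

The same shape works for paginated API responses or database cursors: each batch can be validated, transformed, and written before the next is pulled.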

Posted 1 week ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Atos Atos is a global leader in digital transformation with c. 78,000 employees and annual revenue of c. €10 billion. European number one in cybersecurity, cloud and high-performance computing, the Group provides tailored end-to-end solutions for all industries in 68 countries. A pioneer in decarbonization services and products, Atos is committed to a secure and decarbonized digital for its clients. Atos is a SE (Societas Europaea) and listed on Euronext Paris. The purpose of Atos is to help design the future of the information space. Its expertise and services support the development of knowledge, education and research in a multicultural approach and contribute to the development of scientific and technological excellence. Across the world, the Group enables its customers and employees, and members of societies at large to live, work and develop sustainably, in a safe and secure information space.

Data Streaming Engineer - Experience: 4+ years
- Expertise in the Python language is a must.
- SQL (able to write complex SQL queries) is a must.
- Hands-on experience in Apache Flink Streaming or Spark Streaming is a must.
- Hands-on expertise with Apache Kafka is a must.
- Data lake development experience.
- Orchestration (Apache Airflow is preferred).
- Spark and Hive; optimization of Spark/PySpark and Hive apps.
- Trino/AWS Athena (good to have).
- Snowflake (good to have).
- Data quality (good to have).
- File storage (S3 is good to have).

Our Offering
- Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment.
- Wellbeing programs and work-life balance: integration and passion-sharing events.
- Attractive salary and company initiative benefits.
- Courses and conferences.
- Hybrid work culture.
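The tumbling-window aggregation at the heart of Flink Streaming and Spark Structured Streaming jobs can be sketched in plain Python. This illustrates the windowing idea only; the function and event names are invented for the example, and it is not Flink API code:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size):
    """Assign each (timestamp, key) event to a fixed, non-overlapping window
    and count events per (window_start, key) -- the core idea behind a
    tumbling-window aggregation in a streaming engine."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_size) * window_size
        counts[(window_start, key)] += 1
    return dict(counts)

# Events as (timestamp, key); windows are [0, 10) and [10, 20).
events = [(1, "a"), (4, "a"), (5, "b"), (11, "a"), (12, "b"), (13, "b")]
result = tumbling_window_counts(events, window_size=10)
```

A real Flink job would express this as a keyed stream with `TumblingEventTimeWindows`, with watermarks deciding when each window can be emitted; the grouping arithmetic is the same.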

Posted 1 week ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are hiring an HPC (High Performance Computing) Engineer who can work onsite from Chennai.
Job Title: Technical Lead – HPC (High Performance Computing)
Location: KLA, Chennai, India
Experience: 6–10 Years
Education: BE/BTech/MCA/MSc (No Diploma/BCA/BSc)
Industry: Semiconductor

Key Responsibilities
- Design, implement, and support HPC clusters (CPU/GPU-based)
- Manage BOMs, vendors, and hardware release cycles
- Configure and optimize Linux-based HPC systems
- Ensure project delivery and support manufacturing with quality deliverables
- Develop scripts (Shell/Python) and automation tools

Required Skills
- Expertise in Linux (SuSE, RedHat, Rocky, Ubuntu)
- Strong hardware knowledge: servers, GPUs, storage, networking, BIOS/BMC
- Familiarity with PXE boot, Linux HA, TCP/IP, DNS, DHCP
- Experience with config management tools (Salt, Chef, Puppet)

Preferred Skills
- DevOps tools: Jenkins, Git, Docker/Singularity
- Kubernetes, Prometheus, Grafana
- Web server & proxy setup: Apache/Nginx, HAProxy

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

We are hiring a Data Engineer. If you are interested, please feel free to share your CV to SyedaRashna@lancesoft.com Job title: Data Engineer Location: India - Remote Duration: 6 Months Description: We are seeking a highly skilled and motivated Data Engineer to join our dynamic technology team. The ideal candidate will have deep expertise in data engineering tools and platforms, particularly Apache Airflow, PySpark, and Python, with hands-on experience in Cloudera Data Platform (CDP). A strong understanding of DevOps practices and exposure to AI/ML and Generative AI use cases is highly desirable. Key Responsibilities: 1. Design, build, and maintain scalable data pipelines using Python, PySpark and Airflow. 2. Develop and optimize ETL workflows on Cloudera Data Platform (CDP). 3. Implement data quality checks, monitoring, and alerting mechanisms. 4. Ensure data security, governance, and compliance across all pipelines. 5. Work closely with cross-functional teams to understand data requirements and deliver solutions. 6. Troubleshoot and resolve issues in production data pipelines. 7. Contribute to the architecture and design of the data platform. 8. Collaborate with engineering teams and analysts to work on AI/ML and Gen AI use cases. 9. Automate deployment and monitoring of data workflows using DevOps tools and practices. 10. Stay updated with the latest trends in data engineering, AI/ML, and Gen AI technologies.
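The data quality checks mentioned in the responsibilities above often start as a simple rule pass over each batch before it is published downstream. A hedged, stdlib-only sketch; the `check_rows` helper and the field names are illustrative, not taken from the job description:

```python
def check_rows(rows, required, non_null):
    """Return per-rule violation counts for a batch of records -- the kind of
    lightweight data-quality gate a pipeline task can run before publishing."""
    violations = {"missing_field": 0, "null_value": 0}
    for row in rows:
        for field in required:
            if field not in row:
                violations["missing_field"] += 1
        for field in non_null:
            if row.get(field) is None:
                violations["null_value"] += 1
    return violations

batch = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},   # null value
    {"amount": 5.0},             # missing id
]
report = check_rows(batch, required=("id", "amount"), non_null=("amount",))
```

In an orchestrated pipeline, a task like this would typically run between extract and load, failing the run (or routing bad rows to quarantine) when a violation count crosses a threshold.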

Posted 1 week ago

Apply

10.0 years

0 Lacs

India

Remote

About Granica Granica is redefining how enterprises prepare and optimize data at the most fundamental layer of the AI stack—where raw information is transformed into usable intelligence. We’re built to streamline cloud infrastructure, reduce storage and compute costs, and accelerate data pipelines, therefore helping companies turn massive raw datasets into intelligent, usable fuel for AI. In short: we build better data for better AI . Smarter Infrastructure for the AI Era: We make data efficient, safe, and ready for scale—think smarter, more foundational infrastructure for the AI era. Our technology integrates directly with modern data stacks like Snowflake, Databricks, and S3-based data lakes, enabling: 60%+ reduction in storage costs and up to 60% lower compute spend 3x faster data processing 20% platform efficiency gains Trusted by Industry Leaders Enterprise leaders globally already rely on Granica to cut costs, boost performance, and unlock more value from their existing data platforms. A Deep Tech Approach to AI We’re unlocking the layers beneath platforms like Snowflake and Databricks, making them faster, cheaper, and more AI-native. We combine advanced research with practical productization, powered by a dual-track strategy: Research: Led by Chief Scientist Andrea Montanari (Stanford Professor), we publish 1–2 top-tier papers per quarter. Product: Actively processing 100+ PBs today and targeting Exabyte scale by Q4 2025. Backed by the Best We’ve raised $60M+ from NEA, Bain Capital, A Capital, and operators behind Okta, Eventbrite, Tesla, and Databricks. Our Mission To convert entropy into intelligence, so every builder—human or AI—can make the impossible real. We’re building the default data substrate for AI, and a generational company built to endure beyond any single product cycle. 
Job Summary We are looking for an SDET (QA Automation Engineer) with hands-on experience in backend testing using Python, and working knowledge of Kubernetes, Apache Spark, and data lake architectures. In this role, you'll collaborate with engineers across product and platform teams to ensure the quality of services powering our data-driven AI infrastructure. What You'll Do Test Automation Design, develop, and maintain automated test scripts using industry-standard tools and frameworks Create and execute comprehensive test plans for APIs and big data applications Implement automated regression, functional, integration, and performance testing Develop and maintain test data management strategies Create reusable test components and maintain test automation frameworks Quality Assurance Perform manual testing when required, including exploratory and usability testing Identify, document, and track software defects using bug tracking tools Collaborate with developers to reproduce and resolve issues Conduct root cause analysis for test failures and production issues Ensure compliance with quality standards and testing methodologies Process Improvement Integrate automated tests into CI/CD pipelines Provide testing estimates and ensure timely delivery of testing milestones Continuously evaluate and implement new testing tools and methodologies What We're Looking For 6–10 years of experience in backend test automation with a strong focus on Python. Experience working in distributed systems, data engineering, or infrastructure-heavy environments. Familiarity with Apache Spark and related big data technologies. Hands-on experience with Kubernetes for container orchestration and test environment setup. Solid understanding of data lakes, including experience with formats (Parquet, ORC), storage layers, or lakehouse platforms. Experience with REST API testing, data validation, and large-scale test data management.
Comfortable with tools like Pytest, Postman, Git, Jenkins, or similar CI/CD tools. Strong debugging and problem-solving skills in cloud-native environments. Nice-to-Haves Background in data infrastructure, machine learning pipelines, or systems programming Familiarity with distributed systems concepts (e.g., compression, storage tiering, streaming data) Experience working in a startup or fast-paced technical environment Experience with Kubernetes, Terraform, or infrastructure-as-code tools Comfort with performance tuning, benchmarking, and systems observability Why Granica? Work hands-on with petabyte-scale datasets, design performant systems and compression algorithms that matter Partner with elite engineers from companies like Google, Tesla, and Palantir on complex issues Tackle meaningful problems that push the boundaries of what's possible in data infrastructure and AI Outcome-driven culture: Low ego, high trust, customer-obsessed. We scaled to multimillion-dollar ARR without a dedicated sales team, on product pull and ROI alone. Generous benefits: Unlimited PTO, flexible hybrid setup, competitive compensation, full health coverage Backed by top-tier VCs with strong runway and bold ambitions. Benefits: Highly competitive compensation with uncapped commissions and meaningful equity Immigration sponsorship and counseling Premium health, dental, and vision coverage Flexible remote work and unlimited PTO Quarterly recharge days and annual team off-sites Budget for learning, development, and conferences Granica celebrates diversity and is committed to creating an inclusive environment for all employees. We do not discriminate on the basis of race, religion, color, gender expression or identity, sexual orientation, national origin, citizenship, age, marital status, veteran status, disability status, or any other characteristic protected by law.
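The kind of backend data validation this SDET role automates can be sketched with the stdlib `unittest` module (pytest would look similar, with bare `assert`s). The `validate_record` function and its schema are invented for illustration:

```python
import unittest

def validate_record(record, schema):
    """Return a list of field-level errors for one record against a simple
    {name: type} schema -- a stand-in for validating lake-bound datasets."""
    errors = []
    for name, expected_type in schema.items():
        if name not in record:
            errors.append(f"missing: {name}")
        elif not isinstance(record[name], expected_type):
            errors.append(f"bad type: {name}")
    return errors

class ValidateRecordTests(unittest.TestCase):
    def test_valid_record_has_no_errors(self):
        schema = {"id": int, "name": str}
        self.assertEqual(validate_record({"id": 1, "name": "x"}, schema), [])

    def test_missing_and_mistyped_fields_are_reported(self):
        errors = validate_record({"id": "1"}, {"id": int, "name": str})
        self.assertIn("bad type: id", errors)
        self.assertIn("missing: name", errors)
```

Run with `python -m unittest <module>`; in a CI/CD pipeline this is the step that gates a merge or a data release.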

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Immediate Openings for Java Developers | 1-Day Interview Drive | Hyderabad | Chennai We’re on the hunt for passionate, experienced Java Developers ready to take on exciting challenges at a leading global MNC . If you're looking for high-impact work, cutting-edge technology, and a team that values innovation — this is your opportunity. Walk-In Drive Details: Date: Saturday, 26th July 2025 Mode: Face-to-Face ONLY Experience Required: 5 to 9 Years Openings: Multiple Vacancies Tech Skills We’re Looking For: Strong hands-on experience in Java & Spring Boot Solid understanding of RDBMS Familiarity with DevOps tools and CI/CD pipelines Exposure to JBoss/WebLogic, Apache/Nginx, Redis, Coherence Experience with JUnit, Mockito, and secure application development Knowledge of Spring Security (Groovy is a plus) Ideal Candidates: 5–9 years of relevant experience Able to join immediately or within 30 days Strong problem-solving & design skills Comfortable working in fast-paced environments Why You Should Attend: Fast-track hiring – interview & offer on the same day Opportunity to work on enterprise-level, modern projects Join one of the world’s top MNCs with global career growth Collaborative, tech-forward work culture 📩 How to Apply Quickly: 👉 Comment “Interested” below 👉 DM me directly with your resume 👉 Or email your CV to [shalini.v@saranshinc.com] with subject: “Java Walk-in Drive – 26th July – [Your Name]” Don’t miss this chance to level up your career. Share this post or tag someone who might be a great fit! #JavaDevelopers #JavaHiring #WalkInDrive #SpringBoot #DevOpsJobs #TopMNC #ImmediateJoiners #BackendJobs #TechCareers #HiringAlert #JavaCareers #SoftwareJobs #CareerGrowth #OnsiteRoles #InterviewDrive #JavaJobs2025

Posted 1 week ago

Apply

5.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site

Senior Back End Full Stack Developer (Java, Angular 2+) Experience: 6–10 yrs Mandatory Skill Set: Core Java, Spring Boot, Angular 2+, PostgreSQL/SQL/MSSQL, Hibernate/JPA · Designed, developed and supported complex software applications for at least 5 years · Proficient in programming with Java, JavaScript, & SQL · Experienced with MVC architecture, Spring framework, Spring Boot framework, Apache, Tomcat, Hibernate/JPA, & JUnit, Angular 14+ · Good problem-solving skills & quick learner for new concepts & new technologies · Good knowledge of HTML, CSS · Excellent verbal and written communication skills

Posted 1 week ago

Apply

14.0 years

20 - 50 Lacs

Chennai, Tamil Nadu, India

On-site

Position Title: Technical Lead – HPC (High Performance Computing) Location: Chennai, India Domain: Information Technology (IT) Experience Required: 7–14 Years Salary Range: INR 20,00,000 – 50,00,000 Notice Period: Immediate to 60 days Key Responsibilities Design, implement, and support high-performance computing (HPC) clusters. Deep understanding of HPC systems, including CPU/GPU architecture, scalable storage, high-bandwidth interconnects, and cloud-based architectures. Generate hardware BOMs for HPC clusters, manage vendor relations, and oversee hardware release activities. Configure Linux OS for HPC systems. Interpret project specifications and performance requirements at both subsystem and system levels. Ensure timely project deliveries aligned with program goals. Support the release of new products to manufacturing and end customers with high-quality documentation, scripts, and golden images. Required Qualifications Minimum 7 years of experience in: HPC systems and clusters Linux systems (SuSE, RedHat, Rocky, Ubuntu) HPC hardware: servers, GPUs, networking, storage, BIOS & BMC TCP/IP fundamentals and protocols (DNS, DHCP, HTTP, LDAP, SMTP) Scripting experience in Shell and Python Familiarity with configuration management tools (e.g., Salt, Chef, Puppet) Experience with storage setup and maintenance Hands-on with SystemD, PXE booting, and Linux HA Preferred Qualifications Exposure to DevOps tools: Jenkins, Git-based repositories, Docker, Singularity Working knowledge of Kubernetes, Prometheus, Grafana Experience with Apache/Nginx, HAProxy, load balancing, and application routing Degree in Computer Engineering or Electrical Engineering Educational Qualification: BE/BTech, MCA, MSc, or MS (Mandatory) Candidates with Diploma or 3-year degrees (BCA/BSc) will not be considered. 
Skills: BIOS, Python scripting, shell scripting, Grafana, Nginx, HPC hardware, HPC systems, Kubernetes, BOMs, Apache, TCP/IP fundamentals, Prometheus, HAProxy, PXE booting, Linux HA, configuration management tools, DevOps tools, Linux, storage setup, Linux systems, SystemD

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

At Ververica, we are the original creators and core contributors to Apache Flink, the leading open-source stream and batch processing engine. Flink powers mission-critical applications at companies like Netflix, Alibaba, Amazon, and Uber, as well as banks, telcos, and global enterprises. We are passionate about open source and equally committed to delivering enterprise-grade data processing solutions that help organizations unlock real-time insights at scale. Backed by one of the world's largest tech companies, Ververica offers the best of both worlds: innovation with stability. As a Technical Account Manager (TAM) at Ververica, you'll play a central role in our customers' success. You'll act as a trusted advisor and hands-on technical partner, helping organizations adopt and scale their real-time data processing architectures using Ververica Unified Streaming Data Platform, which is built on top of Apache Flink. You'll work closely with engineers, data teams, and platform owners across industries to solve complex challenges — from system design and deployment to performance tuning and long-term operations. What You'll Do Own the technical relationship with a portfolio of enterprise customers — from onboarding through expansion. Guide customers in architecting and optimizing their stream processing infrastructure with Ververica Unified Streaming Data Platform. Advise on best practices for Flink job design, deployment models (on-prem Kubernetes, Cloud provider, Managed Flink, BYOC), and monitoring strategies. Provide in-depth technical troubleshooting and performance tuning across data pipelines, connectors, and clusters. Collaborate cross-functionally with our Engineering, Product, Sales, and Product Marketing teams to advocate for customer needs. Assist with incident triage, root cause analysis, and building out operational playbooks for scalable Flink deployments. Support migrations from legacy or batch-based systems to real-time processing.
Conduct regular check-in calls and Quarterly Business Reviews (QBRs) with strategic customers to align on goals, feedback, and roadmap priorities. Participate in on-call support rotations. Represent Ververica in customer meetings, roadmap sessions, and (if you're interested) at conferences and meetups. Continuously sharpen your Flink expertise and stay ahead of trends in distributed systems, observability, and cloud infrastructure. Requirements Must-Have Skills Familiarity with cloud platforms such as AWS, GCP, or Azure and container orchestration with Kubernetes — our product runs on Kubernetes, so hands-on Kubernetes experience is essential. Ability to read and reason about Java code, with working knowledge of Java 11+ features and a basic understanding of the JVM. Working knowledge of SQL, including querying and reading streaming data. Excellent problem-solving skills, with a proven ability to diagnose and resolve complex technical issues Familiarity with streaming technologies such as Apache Flink, Apache Kafka, or Redpanda. Excellent communication skills — you're comfortable speaking with both technical engineers and non-technical stakeholders Nice-to-Have Skills Working knowledge of Python, especially useful when supporting customers using PyFlink. Experience with observability and monitoring tools like Grafana, Datadog, DynaTrace, or similar platforms. Familiarity with DevOps workflows or CI/CD tooling in cloud-native environments. Experience running Apache Flink in production or contributing to open-source data infrastructure projects Key Attributes for Success 3+ years in a customer-facing technical role such as TAM, Solutions Engineer, or Support Engineer. Strong ability to build and maintain relationships with enterprise customers, acting as a trusted technical advisor. A genuine sense of empathy and ownership in helping customers resolve technical issues and achieve success. 
Ability to understand customer use cases, map them to Ververica capabilities, and guide customers in designing effective solutions. Strong sense of accountability, autonomy, and a desire to make a measurable impact in customer outcomes Ability to manage multiple client accounts and projects simultaneously, prioritizing tasks and meeting deadlines Benefits What We Offer Competitive compensation: salary, equity, performance bonus Remote-first work culture with flexible hours Access to Flink core developers and a global open-source ecosystem Generous vacation, personal leave, and wellness benefits Offices in select cities for in-person collaboration when desired At Ververica, we believe innovation starts with diverse perspectives. We welcome applicants from all backgrounds and life experiences — if you're passionate about helping customers succeed with cutting-edge technology, we'd love to hear from you.

Posted 1 week ago

Apply

9.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About Cyfuture Cyfuture is a trusted name in IT services and cloud infrastructure, offering state-of-the-art data center solutions and managed services across platforms like AWS, Azure, and VMWare. We are expanding rapidly in system integration and managed services, building strong alliances with global OEMs like VMWare, AWS, Azure, HP, Dell, Lenovo, and Palo Alto. 💼 JD – Windows Engineer (L3) | 7–9 Years Experience Location: Noida NSEZ | Work Mode: 24×7 Environment | Shift: Rotational | Position Type: Full-Time 🧩 About the Role: We are looking for a seasoned Windows Engineer (L3 level) with 7–9 years of hands-on experience in Windows Server administration, web hosting, mail systems, and database environments. The ideal candidate should have strong analytical skills, the ability to manage complex environments, and the capability to lead escalations, mentor juniors, and ensure seamless infrastructure operations. 🔧 Key Responsibilities: Web Hosting & Web Server Administration: Advanced configuration and management of IIS, Apache, Nginx, Plesk, and cPanel Fixing critical security vulnerabilities (e.g., XSS, host-header injection, weak encryption) End-to-end management of SSL, caching, extensions, and performance tuning Migration, upgrade, DR planning, and optimization of websites and apps High-availability configuration including load balancing and backup/restore Mail Servers (Windows Based): Deep expertise in Microsoft Exchange, SmarterMail, and MailEnable Handling large-scale mail environments, security hardening, and patching DR planning, troubleshooting, and high-availability mail setup Database Administration (Windows): Advanced management of Microsoft SQL Server, MySQL, PostgreSQL Troubleshooting performance issues, replication, backup and recovery Patch management and optimization for DB infrastructure General & Leadership: Deep understanding of Windows Server OS (2012, 2016, 2019), AD, DNS, DHCP Strong networking and security fundamentals Mentoring L1/L2 teams 
and leading critical incident resolution Comfortable with a 24x7 support environment ✅ Requirements: 7 to 9 years of relevant experience Strong communication and escalation management skills Ability to independently manage and secure production environments Willingness to work rotational shifts and on-call 💼 JD – Windows Engineer (L2) | 2–3 Years Experience Location: Noida NSEZ | Work Mode: 24×7 Environment | Shift: Rotational | Position Type: Full-Time 🧩 About the Role: We are seeking a proactive Windows Engineer (L2 level) with 2–3 years of hands-on experience supporting Windows infrastructure, web hosting, email services, and database servers. The role involves operational support, first-line remediation, patching, monitoring, and responding to incidents. 🔧 Key Responsibilities: Web Hosting & Web Server Support: Setup and basic troubleshooting of IIS, Apache, Nginx, Plesk, and cPanel SSL installation, minor bug fixing, and extension support Participate in patching, backups, and basic DR setup Address basic security issues and coordinate escalations Mail Server Administration: Configuration and maintenance of Microsoft Exchange, MailEnable, and SmarterMail Monitor mail server health, perform patching, and backup activities Work with L3 team on escalated mail performance/security issues Database Support: Support and monitor Microsoft SQL, MySQL, PostgreSQL on Windows servers Handle patching, backups, and DR activities for databases Troubleshoot connectivity and basic performance issues General: Install, maintain, and monitor Windows Server OS Strong fundamentals in networking and system security Coordinate with internal users and handle support tickets effectively Willing to work in a 24×7 rotational environment ✅ Requirements: 2 to 3 years of relevant experience in a similar role Good communication and documentation skills Ability to follow escalation matrix and work under pressure Willing to work night/weekend shifts if needed Share your CV on 
udisha.parashar@cyfuture.com

Posted 1 week ago

Apply

6.0 years

0 Lacs

India

Remote

Data Engineer Remote 7 Months Contract + Extendable Experience: 6 Years We are seeking a highly skilled and motivated Data Engineer to join our dynamic technology team. The ideal candidate will have deep expertise in data engineering tools and platforms, particularly Apache Airflow, PySpark, and Python, with hands-on experience in Cloudera Data Platform (CDP). A strong understanding of DevOps practices and exposure to AI/ML and Generative AI use cases is highly desirable. Key Responsibilities: 1. Design, build, and maintain scalable data pipelines using Python, PySpark and Airflow. 2. Develop and optimize ETL workflows on Cloudera Data Platform (CDP). 3. Implement data quality checks, monitoring, and alerting mechanisms. 4. Ensure data security, governance, and compliance across all pipelines. 5. Work closely with cross-functional teams to understand data requirements and deliver solutions. 6. Troubleshoot and resolve issues in production data pipelines. 7. Contribute to the architecture and design of the data platform. 8. Collaborate with engineering teams and analysts to work on AI/ML and Gen AI use cases. 9. Automate deployment and monitoring of data workflows using DevOps tools and practices. 10. Stay updated with the latest trends in data engineering, AI/ML, and Gen AI technologies.

Posted 1 week ago

Apply