
4657 Apache Jobs - Page 41

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

1.0 - 5.0 years

0 Lacs

Taliparamba, Kerala

On-site

Source: Indeed

HAZERCLOUD™ is a DevOps-as-a-Service company that delivers robust cloud solutions with a focus on automation and on simplifying web application development. Our expert team of DevOps engineers enables businesses and developers to focus on delivering what matters without being held back by technology.

Role Description
This is a full-time, on-site position for a DevOps Specialist located in Kannur, Kerala. The DevOps Engineer will be in charge of implementing, automating, and maintaining web application deployments on platforms such as AWS, and will also assist with CI/CD automation and scripting.

- Subject-matter expert in administering and deploying CI/CD tools such as Git, Jenkins, AWS CodePipeline, etc.
- Expertise in deploying Python Django, Node, and PHP applications.
- Expertise in Linux system administration.
- Specialized in containerisation, EKS, and Kubernetes.
- Experience in CI/CD pipeline design and implementation for Python, Node, and PHP applications.
- Working experience with Apache, Nginx, and MySQL.
- Expertise in troubleshooting and resolving issues in dev, test, and production environments.
- Knowledge of databases including MySQL, MongoDB, Elasticsearch, and DB clusters.
- Jenkins automation server, with plugins built for developing CI/CD pipelines.
- Hands-on experience in the AWS environment (EC2, RDS, S3, EBS, ALB, Route 53, VPC, IAM, etc.); see the sketch after this listing.
- Solid understanding of DNS, CDN, SSL, and WAF.
- Infrastructure-as-code (IaC) skills.
- Critical thinking, time-management, and problem-solving skills.
- Experience with the software development process and continuous integration tools.
- Excellent verbal and written communication skills.
- Ability to work well in a team environment with limited guidance.

Qualifications
- Minimum 1-5 years of DevOps / Linux system administration experience.
- AWS Certified Solutions Architect / SysOps / CloudOps Associate-level certification is mandatory.
- RHCSA / RHCE certification will be a plus.
- B.Tech / Diploma / Bachelor's degree in Computer Science or a related field.

Job Type: Fresher
Pay: ₹30,000.00 - ₹100,000.00 per month
Schedule: Day shift / UK shift / US shift
Work Location: In person
Speak with the employer: +91 9207670011
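A role like this leans on scripted AWS checks day to day. As a minimal sketch (the region is an assumption, and credentials are taken from the environment or instance profile), listing EC2 instance states with boto3 might look like this:

```python
import boto3

# Region is an assumption; credentials come from the environment/instance profile
ec2 = boto3.client("ec2", region_name="ap-south-1")

# Page through all reservations and print each instance's current state
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["State"]["Name"])
```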

Posted 6 days ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Role Summary
Pfizer’s purpose is to deliver breakthroughs that change patients’ lives. Research and Development is at the heart of fulfilling Pfizer’s purpose as we work to translate advanced science and technologies into the therapies and vaccines that matter most. Whether you are in the discovery sciences, ensuring drug safety and efficacy, or supporting clinical trials, you will apply cutting-edge design and process development capabilities to accelerate and bring best-in-class medicines to patients around the world.

Pfizer is seeking a highly skilled and motivated AI Engineer to join our advanced technology team. The successful candidate will be responsible for developing, implementing, and optimizing artificial intelligence models and algorithms to drive innovation and efficiency in our Data Analytics and Supply Chain solutions. This role demands a collaborative mindset, a passion for cutting-edge technology, and a commitment to improving patient outcomes.

Role Responsibilities
- Lead data modeling and engineering efforts within advanced data platform teams to achieve digital outcomes; provide guidance and lead/co-lead moderately complex projects.
- Oversee the development and execution of test plans, creation of test scripts, and thorough data validation processes.
- Lead the architecture, design, and implementation of Cloud Data Lake, Data Warehouse, Data Marts, and Data APIs.
- Lead the development of complex data products that benefit PGS and ensure reusability across the enterprise.
- Collaborate effectively with contractors to deliver technical enhancements.
- Oversee the development of automated systems for building, testing, monitoring, and deploying ETL data pipelines within a continuous integration environment.
- Collaborate with backend engineering teams to analyze data, enhancing its quality and consistency.
- Conduct root cause analysis and address production data issues.
- Lead the design, development, and implementation of AI models and algorithms to solve sophisticated data analytics and supply chain initiatives.
- Stay abreast of the latest advancements in AI and machine learning technologies and apply them to Pfizer's projects.
- Provide technical expertise and guidance to team members and stakeholders on AI-related initiatives.
- Document and present findings, methodologies, and project outcomes to various stakeholders.
- Integrate and collaborate with different technical teams across Digital to drive overall implementation and delivery.
- Work with large and complex datasets, including data cleaning, preprocessing, and feature selection.

Basic Qualifications
- A bachelor's or master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related discipline.
- Over 4 years of experience as a Data Engineer, Data Architect, or in data warehousing, data modeling, and data transformations.
- Over 2 years of experience in AI, machine learning, and large language model (LLM) development and deployment.
- A proven track record of successfully implementing AI solutions in a healthcare or pharmaceutical setting is preferred.
- Strong understanding of data structures, algorithms, and software design principles.
- Programming Languages: proficiency in Python and SQL, with familiarity with Java or Scala.
- AI and Automation: knowledge of AI-driven tools for data pipeline automation, such as Apache Airflow or Prefect (see the sketch after this listing).
- Ability to use GenAI or agents to augment data engineering practices.

Preferred Qualifications
- Data Warehousing: experience with data warehousing solutions such as Amazon Redshift, Google BigQuery, or Snowflake.
- ETL Tools: knowledge of ETL tools like Apache NiFi, Talend, or Informatica.
- Big Data Technologies: familiarity with Hadoop, Spark, and Kafka for big data processing.
- Cloud Platforms: hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP).
- Containerization: understanding of Docker and Kubernetes for containerization and orchestration.
- Data Integration: skills in integrating data from various sources, including APIs, databases, and external files.
- Data Modeling: understanding of data modeling and database design principles, including graph technologies like Neo4j or Amazon Neptune.
- Structured Data: proficiency in handling structured data from relational databases, data warehouses, and spreadsheets.
- Unstructured Data: experience with unstructured data sources such as text, images, and log files, and tools like Apache Solr or Elasticsearch.
- Data Excellence: familiarity with data excellence concepts, including data governance, data quality management, and data stewardship.

Non-standard Work Schedule, Travel or Environment Requirements
Occasional travel required. Work Location Assignment: Hybrid.

The annual base salary for this position ranges from $96,300.00 to $160,500.00. In addition, this position is eligible for participation in Pfizer’s Global Performance Plan with a bonus target of 12.5% of the base salary, and eligibility to participate in our share-based long-term incentive program. We offer comprehensive and generous benefits and programs to help our colleagues lead healthy lives and to support each of life’s moments. Benefits offered include a 401(k) plan with Pfizer Matching Contributions and an additional Pfizer Retirement Savings Contribution; paid vacation, holiday, and personal days; paid caregiver/parental and medical leave; and health benefits including medical, prescription drug, dental, and vision coverage. Learn more at Pfizer Candidate Site – U.S. Benefits (uscandidates.mypfizerbenefits.com). Pfizer compensation structures and benefit packages are aligned based on the location of hire. The United States salary range provided does not apply to Tampa, FL, or any location outside of the United States. Relocation assistance may be available based on business needs and/or eligibility.

Sunshine Act
Pfizer reports payments and other transfers of value to health care providers as required by federal and state transparency laws and implementing regulations. These laws and regulations require Pfizer to provide government agencies with information such as a health care provider’s name, address, and the type of payments or other value received, generally for public disclosure. Subject to further legal review and statutory or regulatory clarification, which Pfizer intends to pursue, reimbursement of recruiting expenses for licensed physicians may constitute a reportable transfer of value under the federal transparency law commonly known as the Sunshine Act. Therefore, if you are a licensed physician who incurs recruiting expenses as a result of interviewing with Pfizer that we pay or reimburse, your name, address, and the amount of payments made currently will be reported to the government. If you have questions regarding this matter, please do not hesitate to contact your Talent Acquisition representative.

EEO & Employment Eligibility
Pfizer is committed to equal opportunity in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, disability or veteran status. Pfizer also complies with all applicable national, state and local laws governing nondiscrimination in employment as well as work authorization and employment eligibility verification requirements of the Immigration and Nationality Act and IRCA. Pfizer is an E-Verify employer. This position requires permanent work authorization in the United States.

Information & Business Tech
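The listing names Apache Airflow and Prefect for pipeline automation. A minimal Prefect-style flow is sketched below; the task names and logic are illustrative assumptions, not Pfizer's actual pipeline:

```python
from prefect import flow, task

@task
def extract() -> list[int]:
    # Stand-in for pulling rows from a source system
    return [1, 2, 3]

@task
def transform(rows: list[int]) -> list[int]:
    # Trivial transformation for illustration
    return [r * 10 for r in rows]

@task
def load(rows: list[int]) -> None:
    print(f"loaded {len(rows)} rows")

@flow
def supply_chain_pipeline():
    # Task dependencies follow the call order: extract -> transform -> load
    load(transform(extract()))

if __name__ == "__main__":
    supply_chain_pipeline()
```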

Posted 6 days ago

Apply

4.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Source: LinkedIn

We are seeking a highly skilled Product Data Engineer with expertise in building, maintaining, and optimizing data pipelines using Python scripting. The ideal candidate will have experience working in a Linux environment, managing large-scale data ingestion, processing files in S3, and balancing disk space and warehouse storage efficiently. This role will be responsible for ensuring seamless data movement across systems while maintaining performance, scalability, and reliability.

Key Responsibilities:
- ETL Pipeline Development: design, develop, and maintain efficient ETL workflows using Python to extract, transform, and load data into structured data warehouses.
- Data Pipeline Optimization: monitor and optimize data pipeline performance, ensuring scalability and reliability in handling large data volumes.
- Linux Server Management: work in a Linux-based environment, executing command-line operations, managing processes, and troubleshooting system performance issues.
- File Handling & Storage Management: efficiently manage data files in Amazon S3, ensuring proper storage organization, retrieval, and archiving of data.
- Disk Space & Warehouse Balancing: proactively monitor and manage disk space usage, preventing storage bottlenecks and ensuring warehouse efficiency.
- Error Handling & Logging: implement robust error-handling mechanisms and logging systems to monitor data pipeline health.
- Automation & Scheduling: automate ETL processes using cron jobs, Airflow, or other workflow orchestration tools (a minimal Airflow sketch follows this listing).
- Data Quality & Validation: ensure data integrity and consistency by implementing validation checks and reconciliation processes.
- Security & Compliance: follow best practices in data security, access control, and compliance while handling sensitive data.
- Collaboration with Teams: work closely with data engineers, analysts, and product teams to align data processing with business needs.

Skills Required:
- Proficiency in Python: strong hands-on experience in writing Python scripts for ETL processes.
- Linux Expertise: experience working with Linux servers, command-line operations, and system performance tuning.
- Cloud Storage Management: hands-on experience with Amazon S3, including handling file storage, retrieval, and lifecycle policies.
- Data Pipeline Management: experience with ETL frameworks, data pipeline automation, and workflow scheduling (e.g., Apache Airflow, Luigi, or Prefect).
- SQL & Database Handling: strong SQL skills for data extraction, transformation, and loading into relational databases and data warehouses.
- Disk Space & Storage Optimization: ability to manage disk space efficiently, balancing usage across different systems.
- Error Handling & Debugging: strong problem-solving skills to troubleshoot ETL failures, debug logs, and resolve data inconsistencies.

Nice to Have:
- Experience with cloud data warehouses (e.g., Snowflake, Redshift, BigQuery).
- Knowledge of message queues (Kafka, RabbitMQ) for data streaming.
- Familiarity with containerization tools (Docker, Kubernetes) for deployment.
- Exposure to infrastructure automation tools (Terraform, Ansible).

Qualifications:
- Bachelor’s degree in Computer Science, Data Engineering, or a related field.
- 4+ years of experience in ETL development, data pipeline management, or backend data engineering.
- Strong analytical mindset and ability to handle large-scale data processing efficiently.
- Ability to work independently in a fast-paced, product-driven environment.
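A minimal sketch of the kind of scheduled ETL this listing describes, using Apache Airflow's Python API; the DAG id, task bodies, and schedule are assumptions for illustration:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Stand-in for pulling raw files from S3 (e.g., with boto3)
    print("extracting raw files")

def transform(**context):
    print("transforming records")

def load(**context):
    print("loading into the warehouse")

with DAG(
    dag_id="product_etl",               # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3                      # linear dependency chain
```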

Posted 6 days ago

Apply

9.0 - 14.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Greetings from TCS. TCS is hiring an AI Engineer.

Experience: 9-14 years
Relevant experience: 9-14 years
Work location: PAN India

Job Description - Must Have
- 9-14 years of IT experience.
- Strong programming skills in Python.
- Expertise in machine learning is a must: in-depth knowledge of various machine learning models and techniques, including deep learning, supervised and unsupervised learning, natural language processing, and reinforcement learning.
- Expertise in data analysis and visualization to extract insights from large datasets and present them visually.
- Good knowledge of data mining, statistical methods, data wrangling, and visualization tools like Power BI, Tableau, and matplotlib.
- Hands-on skills in Data Manipulation Language (DML).
- Expertise in various machine learning frameworks: TensorFlow, Scikit-Learn, and PyTorch (see the sketch after this listing).

Good to Have
- Gen AI certification.
- Experience with containers (Docker), Kubernetes, Kafka (or another messaging platform), Apache Camel, RabbitMQ, ActiveMQ, storage / RDBMS, and NoSQL databases.
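As a minimal sketch of a supervised-learning workflow in one of the frameworks the listing names (Scikit-Learn), with a toy dataset and model choice that are illustrative only:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a toy dataset and hold out a test split
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a simple supervised model and evaluate it
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```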

Posted 6 days ago

Apply

6.0 years

0 Lacs

India

Remote

Source: LinkedIn

Position Overview
We seek a talented frontend-first Full-Stack Engineer with 6+ years of experience to join our dynamic software consulting team. As a Full-Stack Engineer, you will be responsible for developing and implementing high-quality software solutions for our clients, working on both front-end and back-end aspects of projects.

Primary Skill Sets
- Frontend: React.js, TypeScript, TanStack Query, React render flow, Next.js
- Backend: Node.js, Express.js, NestJS, Sequelize ORM, Server-Sent Events (SSE), WebSockets, Event Emitter
- Stylesheets: MUI, Tailwind

Secondary Skill Sets
- Messaging Systems: Apache Kafka, AWS SQS, RabbitMQ
- Containerization & Orchestration: Docker, Kubernetes (bonus)
- Databases & Caching: Redis, Elasticsearch, MySQL, PostgreSQL

Bonus Experience
- Proven experience in building agentic UX to enable intelligent, goal-driven user interactions.
- Hands-on expertise in designing and implementing complex workflow management systems.
- Developed Business Process Management (BPM) platforms and dynamic application builders for configurable enterprise solutions.

Posted 6 days ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

About Simpleenergy
Simpleenergy specializes in the manufacture of smart electric two-wheelers. We are a team of 300+ engineers coming together to make smart, supercharging, and affordable two-wheelers. The company was founded in 2019 and is based in Bangalore, India. Our mission is to build the future of mobility that is electric and connected. We at Simpleenergy are working to accelerate that future by making electric vehicles more accessible, affordable, secure, and comfortable, and we embrace the responsibility to lead the change that will make our world better, safer, and more equitable for all.

Job description: Data Engineer
Location: Yelahanka, Bangalore

About The Gig
We’re on the lookout for a Data Engineer who loves building scalable data pipelines and can dance with Kafka and Flink like they’re on their playlist. If Spark is your old buddy, even better, but it’s not a deal-breaker.

What You’ll Do
- Design, build, and maintain real-time and batch data pipelines using Apache Kafka and Apache Flink.
- Ensure high-throughput, low-latency, and fault-tolerant data ingestion for telemetry, analytics, and system monitoring.
- Work closely with backend and product teams to define event contracts and data models.
- Maintain schema consistency and versioning across high-volume event streams.
- Optimize Flink jobs for memory, throughput, and latency.
- If you know a little Spark, help out with batch processing and offline analytics too (we won’t complain).
- Ensure data quality, lineage, and observability for everything that flows through your pipelines.

What You Bring
- 3+ years of experience as a data/backend engineer working with real-time or streaming systems.
- Hands-on experience with Kafka (topics, partitions, consumers, etc.); see the consumer sketch after this listing.
- Experience writing production-grade Flink jobs (DataStream API preferred).
- Good fundamentals in distributed systems, partitioning strategies, and stateful processing.
- Comfort with any one programming language: Java, Scala, or Python.
- Basic working knowledge of Spark is a plus (optional, but nice to have).
- Comfort working in a cloud-native environment (GCP or AWS).

🎁 Bonus Points
- Experience with Protobuf/Avro schemas and a schema registry.
- Exposure to time-series data (we live and breathe CAN signals).
- Interest in vehicle data, IoT, or edge computing.

Why Simpleenergy?
- You’ll build pipelines that move billions of records a day from electric vehicles across India.
- You’ll be part of a lean, fast-moving team where decisions happen fast and learning is constant.
- Your code will directly impact how we track, monitor, and improve our vehicles on the road.

Zero fluff. Full impact.

Skills: scala, cloud-native environments, time-series data, data quality, java, avro, batch data pipelines, pipelines, apache flink, data ingestion, flink, kafka, data lineage, distributed systems, gcp, python, real-time data pipelines, aws, data, protobuf, apache kafka
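A minimal sketch of a Kafka consumer in Python for the kind of telemetry stream this listing describes, using the kafka-python client; the topic name, consumer group, and broker address are assumptions:

```python
import json

from kafka import KafkaConsumer

# Subscribe to a hypothetical telemetry topic and decode JSON payloads
consumer = KafkaConsumer(
    "vehicle-telemetry",                    # assumed topic name
    bootstrap_servers="localhost:9092",     # assumed broker address
    group_id="telemetry-monitor",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Partition and offset arrive with every record, which is what makes
    # replay and offset bookkeeping possible downstream
    print(message.partition, message.offset, event)
```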

Posted 6 days ago

Apply

4.0 years

0 Lacs

Kochi, Kerala, India

On-site

Source: LinkedIn

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities
- Experience building data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
- Experience developing efficient software code for multiple use cases leveraging the Spark Framework with Python or Scala and big data technologies.
- Experience in developing streaming pipelines (see the sketch after this listing).
- Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data / cloud technologies such as Apache Spark, Kafka, and cloud computing services.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on Cloud Data Platforms on AWS.
- Experience with AWS EMR / AWS Glue / Databricks, AWS Redshift, and DynamoDB.
- Good to excellent SQL skills.
- Exposure to streaming solutions and message brokers like Kafka.

Preferred Technical And Professional Experience
- Certification in AWS, and Databricks- or Cloudera Spark-certified developers.
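A minimal sketch of the streaming-pipeline pattern this listing mentions: PySpark Structured Streaming reading from Kafka (requires the Spark-Kafka connector package; the broker, topic, and payload schema are assumptions):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Assumed JSON payload schema for incoming events
schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
       .option("subscribe", "orders")                        # assumed topic
       .load())

# Kafka delivers bytes; cast to string and parse the JSON body
parsed = (raw
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Console sink for demonstration; a real job would target HDFS/S3/Delta
query = parsed.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```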

Posted 6 days ago

Apply

5.0 years

0 Lacs

Jakarta, Indonesia

On-site

Source: LinkedIn

Company Description
Quantyc.ai is a software and IT consultancy provider with offices in Mumbai and Jakarta, specializing in analytics, predictive, and data-backed decision support solutions across multiple industries. The company implements world-leading products in Private Wealth, Lending, and Digital Banking with an emphasis on global best practices and state-of-the-art technology.

Role Description
- Design and develop JMeter test scripts for load, stress, and endurance testing.
- Simulate real-world user behavior for Wealth Management applications (e.g., portfolio dashboards, transaction systems); experience here is a plus.
- Integrate JMeter with CI/CD pipelines (e.g., Jenkins, GitLab); see the sketch after this listing.
- Analyze test results and identify performance bottlenecks.
- Collaborate with developers, DevOps, and infrastructure teams to optimize system performance.
- Generate detailed performance reports and dashboards.

Required Skills
- Expertise in Apache JMeter: scripting, parameterization, correlation, assertions, and listeners.
- Experience with distributed load-test execution.
- Experience in banking is a must.
- Familiarity with Wealth Management platforms and financial transaction flows.
- Strong knowledge of HTTP/S, REST APIs, JSON/XML, and SQL.
- Experience with monitoring tools (e.g., Dynatrace, Elastic).

Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 2–5+ years of experience in performance testing, with at least 2 years using JMeter.
- ISTQB or performance testing certifications are a plus.
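In CI pipelines, JMeter plans are typically run headless from the command line. A minimal Python wrapper of that invocation is sketched below; the .jmx plan and output paths are placeholders:

```python
import subprocess

# Run a JMeter plan in non-GUI mode (-n), log raw samples (-l),
# and generate the HTML dashboard report (-e -o)
subprocess.run(
    [
        "jmeter", "-n",
        "-t", "wealth_portfolio_load.jmx",  # hypothetical test plan
        "-l", "results.jtl",
        "-e", "-o", "report/",
    ],
    check=True,  # fail the CI step if JMeter exits non-zero
)
```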

Posted 6 days ago

Apply

4.0 - 7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Role Expectations:
- Design, develop, and execute automated tests to ensure product quality in digital transformation initiatives.
- Collaborate with developers and business stakeholders to understand project requirements and define test strategies.
- Implement API testing using Mockito, Wiremock, and stubs for effective validation of integrations.
- Utilize Kafka and MQ to test and monitor real-time data streaming scenarios.
- Perform automation testing using RestAssured, Selenium, and TestNG to ensure smooth delivery of applications.
- Leverage Splunk and AppDynamics for real-time monitoring, identifying bottlenecks, and diagnosing application issues.
- Create and maintain continuous integration/continuous deployment (CI/CD) pipelines using Gradle and Docker.
- Conduct performance testing using tools like Gatling and JMeter to evaluate application performance and scalability.
- Participate in test management and defect management processes to track progress and issues effectively.
- Work closely with onshore teams and provide insights to enhance test coverage and overall quality.

Qualifications:
- 4-7 years of relevant experience in QA automation and Java.
- Programming: strong experience with Java 8 and above, including a deep understanding of the Streams API.
- Frameworks: proficiency in Spring Boot and JUnit for developing and testing robust applications.
- API Testing: advanced knowledge of RestAssured and Selenium for API and UI automation; candidates must demonstrate hands-on expertise (an illustrative request/assert sketch follows this listing).
- CI/CD Tools: solid understanding of Jenkins for continuous integration and deployment.
- Cloud Platforms: working knowledge of AWS for cloud testing and deployment.
- Monitoring Tools: familiarity with Splunk and AppDynamics for performance monitoring and troubleshooting.
- Defect Management: practical experience with test management tools and defect tracking.
- Build & Deployment: experience with Gradle for build automation and Docker for application containerization.
- SQL: strong proficiency in SQL, including query writing and database operations for validating test results.
- Domain Knowledge: prior experience in the Payments domain with a good understanding of domain-specific workflows.

Nice to Have:
- Data Streaming Tools: experience with Kafka (including basic queries and architecture) or MQ for data streaming testing.
- Financial services or payments domain experience is preferred.
- Frameworks: experience with Apache Camel for message-based application integration.
- Performance Testing: experience with Gatling and JMeter for conducting load and performance testing.
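The automation stack here is Java-based (RestAssured, TestNG), but the underlying request-and-assert pattern is language-agnostic. Purely as an illustration, a pytest-style sketch in Python against a hypothetical payments endpoint:

```python
import requests

BASE_URL = "https://api.example.test"   # placeholder service

def test_create_payment_returns_201():
    payload = {"amount": 100, "currency": "INR"}          # hypothetical schema
    resp = requests.post(f"{BASE_URL}/payments", json=payload, timeout=10)
    assert resp.status_code == 201
    body = resp.json()
    assert body["status"] == "CREATED"                    # assumed response field
    assert body["amount"] == 100
```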

Posted 6 days ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Position Overview:
ShyftLabs is seeking an experienced Databricks Architect to lead the design, development, and optimization of big data solutions using the Databricks Unified Analytics Platform. This role requires deep expertise in Apache Spark, SQL, Python, and cloud platforms (AWS/Azure/GCP). The ideal candidate will collaborate with cross-functional teams to architect scalable, high-performance data platforms and drive data-driven innovation.

ShyftLabs is a growing data product company that was founded in early 2020 and works primarily with Fortune 500 companies. We deliver digital solutions built to accelerate business growth across various industries by focusing on creating value through innovation.

Job Responsibilities
- Architect, design, and optimize big data and AI/ML solutions on the Databricks platform.
- Develop and implement highly scalable ETL pipelines for processing large datasets.
- Lead the adoption of Apache Spark for distributed data processing and real-time analytics.
- Define and enforce data governance, security policies, and compliance standards.
- Optimize data lakehouse architectures for performance, scalability, and cost-efficiency (see the Delta Lake sketch after this listing).
- Collaborate with data scientists, analysts, and engineers to enable AI/ML-driven insights.
- Oversee and troubleshoot Databricks clusters, jobs, and performance bottlenecks.
- Automate data workflows using CI/CD pipelines and infrastructure-as-code practices.
- Ensure data integrity, quality, and reliability across all data processes.

Basic Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
- 10+ years of hands-on experience in data engineering, with at least 5 years as a Databricks architect working with Apache Spark.
- Proficiency in SQL and in Python or Scala for data processing and analytics.
- Extensive experience with cloud platforms (AWS, Azure, or GCP) for data engineering.
- Strong knowledge of ETL frameworks, data lakes, and the Delta Lake architecture.
- Hands-on experience with CI/CD tools and DevOps best practices.
- Familiarity with data security, compliance, and governance best practices.
- Strong problem-solving and analytical skills in a fast-paced environment.

Preferred Qualifications:
- Databricks certifications (e.g., Databricks Certified Data Engineer, Spark Developer).
- Hands-on experience with MLflow, Feature Store, or Databricks SQL.
- Exposure to Kubernetes, Docker, and Terraform.
- Experience with streaming data architectures (Kafka, Kinesis, etc.).
- Strong understanding of business intelligence and reporting tools (Power BI, Tableau, Looker).
- Prior experience working with retail, e-commerce, or ad-tech data platforms.

We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
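A minimal sketch of the lakehouse write pattern the listing describes: a PySpark batch job landing cleaned data as a Delta table (paths and column names are assumptions; on Databricks, the Delta format and a SparkSession are provided by the runtime):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta-etl").getOrCreate()

# Read raw events (path is a placeholder) and apply a simple transformation
raw = spark.read.json("/mnt/raw/events/")            # hypothetical mount
clean = (raw
         .filter(F.col("event_type").isNotNull())    # assumed column name
         .withColumn("ingest_date", F.current_date()))

# Write as a partitioned Delta table, the storage layer lakehouses build on
(clean.write
 .format("delta")
 .mode("append")
 .partitionBy("ingest_date")
 .save("/mnt/delta/events"))                         # hypothetical target path
```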

Posted 6 days ago

Apply

5.0 years

0 Lacs

Surat, Gujarat, India

Remote

Source: LinkedIn

About Praella:
We are a proud Great Place to Work certified organization. We strive for excellence, and we chase perfection for our merchants and team. We build relationships with our merchants that are not reflective of a vendor-like or even a partner-like relationship; we strive to become an extension of who our merchants are. And we strive to become a reflection of our team as an organization. We are also a Webby-winning agency. We are a Shopify Plus partner. We are grateful to be an extension of some of the best e-commerce brands.

We are a merchant-first, results-driven team. We have the nothing-is-impossible mentality. We work together and support each other and our clients. Collaboration and camaraderie are everything. We are data-driven, ambitious, and creative: we work hard, and we work smart.

- Our founders started one of the first Shopify Plus agencies, which was eventually sold.
- We are Shopify Plus Partners and partner with other e-commerce leaders like ReCharge, Klaviyo, Omnisend, Yotpo, Smile, etc.
- We have a remote team, but our headquarters is in Chicago, where we have a small team. Outside of Chicago, we have teams located in Atlanta, Los Angeles, Phoenix, Toronto, Athens (Greece), Sarajevo (Bosnia), and Surat (India).
- Do you want to work from Europe or India for a month and travel to nearby destinations on long weekends? Why not?
- The majority of our clients are e-commerce merchants with annual revenue between $2M and $350M.

We are ambitious. And we want you to be too. We need people who want to be pushed and who want to be challenged. We want people who will push us and who will challenge us. Is that you?

Our website: https://praella.com/

Job Description of Full Stack Developer
Praella is seeking skilled Full Stack Developers to join our dynamic team, driving innovation and setting new standards for user experiences. We value a unique blend of technical expertise, insatiable curiosity, and a methodical, analytical mindset in our ideal candidates.

Objectives of this Role:
- Regularly communicate progress on the long-term technology roadmap with stakeholders, project managers, quality assurance teams, and fellow developers.
- Create and maintain workflows to ensure workload balance and consistent visual designs.
- Develop and oversee testing schedules in the client-server environment, optimizing content display across various devices.
- Produce high-quality, test-driven, and modular code, setting a benchmark for the entire team.
- Recommend system solutions by evaluating custom development and purchase alternatives.

About the Role:
- Write clean, secure, and modular PHP and Node.js code, with a focus on object-oriented programming, security, refactoring, and design patterns.
- Leverage expertise in the Laravel framework, building factories, facades, and libraries using abstract classes, interfaces, and traits.
- Conduct unit testing using frameworks like PHPUnit/phpspec.
- Demonstrate proficiency in RDBMS (MySQL/PostgreSQL), NoSQL databases (MongoDB/DynamoDB), and query optimization techniques.
- Utilize core knowledge of HTML5, CSS3, jQuery, and Bootstrap. Familiarity with JavaScript frameworks (ReactJS/VueJS) is advantageous.
- Develop RESTful APIs, including OAuth 2.0 implementation for authentication and authorization. Experience in microservices development is a plus.
- Proficient in Git, with a clear understanding of Git workflows, Bitbucket, and CI/CD processes.
- Familiarity with cloud servers (Heroku/DigitalOcean), Docker/Homestead, and server administration (Apache/Nginx, php-fpm).
- Create Composer packages and work with webpack, gulp.js, and Babel for browser support.
- Strong problem-solving and analytical skills.
- Excellent written and verbal communication skills in English.

Additional Skills:
- Proficiency in Node.js.
- Experience with the Shopify e-commerce platform would be a valuable additional skill set.

Qualifications:
- Demonstrable experience with PHP, Laravel, Node.js, and relevant frameworks.
- Experience with RDBMS and NoSQL databases.
- Proficiency in front-end technologies such as HTML5, CSS3, jQuery, and Bootstrap.
- Strong understanding of RESTful API design and development.
- Working knowledge of Git, CI/CD processes, and cloud servers.
- Familiarity with Docker, Composer packages, and build tools.

What you can bring to the table:
- Passion for learning and adapting to new technologies.
- Strong problem-solving skills and an analytical mindset.
- Excellent written and verbal communication skills in English.

Join Praella and be a part of a team shaping the future of user experiences. Your expertise will play a key role in our continued success and client satisfaction.

Experience: 5+ years of relevant industry experience.
Education: B.E/B.Tech/B.Sc [(C.S.E)/I.T], M.C.A, M.Sc (I.T)

Life at Praella Private Limited

Benefits and Perks
- 5-day work week
- Fully paid basic life / competitive salary
- Vibrant workplace
- PTO / paid offs / annual paid leaves / parental leaves
- Fully paid health insurance
- Quarterly incentives
- Rewards & recognitions
- Team outings

Our Cultural Attributes
- Growth mindset
- People come first
- Customer obsessed
- Diverse & inclusive
- Exceptional quality
- Push the envelope
- Learn and grow
- Equal opportunity to grow
- Ownership
- Transparency
- Teamwork

Together, we can…!!!!!

Posted 6 days ago

Apply

3.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Work experience: 3-6 years
Budget: 7 Lac max
Notice period: Immediate to 30 days

Linux
- Install, configure, and maintain Linux servers (Red Hat, CentOS, Ubuntu, Amazon Linux).
- Install the Linux OS via network and Kickstart installation.
- Manage system updates, patch management, and kernel upgrades.
- Create and manage user accounts, file systems, permissions, and storage.
- Write shell scripts (Bash, Python) for task automation (see the disk-space sketch after this listing).
- Monitor server performance and troubleshoot hardware/software issues.
- Handle incident management, root cause analysis, and preventive maintenance.
- Implement and manage backup solutions (rsync, cron jobs, snapshot backups).
- Harden servers by configuring firewalls (iptables, firewalld), securing SSH, and managing SELinux.
- Configure and troubleshoot networking services (DNS, DHCP, FTP, HTTP, NFS, Samba).
- Work on virtualization and cloud technologies (AWS EC2, VPC, S3, RDS basics if required).
- Maintain detailed documentation of system configuration and procedures.
- Implement and configure Apache and Tomcat web servers with OpenSSL on Linux.
- Swap space management.
- LVM (extending, reducing, removing, and merging), backup and restoration.

Amazon Web Services
- AWS Infrastructure Management: provision and manage cloud resources like EC2, S3, RDS, VPC, IAM, EKS, Lambda.
- Cloud Architecture: design and implement secure, scalable, and reliable cloud solutions.
- Automation and IaC: automate deployments using tools like Terraform, CloudFormation, or AWS CDK.
- Security Management: configure IAM roles, security groups, encryption (KMS), and enforce best security practices.
- Monitoring and Optimization: monitor cloud resources with CloudWatch and X-Ray, and optimize for cost and performance.
- Backup and Disaster Recovery: set up data backups (S3, Glacier, EBS snapshots) and design DR strategies.
- CI/CD Implementation: build and maintain CI/CD pipelines using AWS services (CodePipeline, CodeBuild) or Jenkins, GitLab, GitHub.
- Networking: manage VPCs, subnets, internet gateways, NAT, VPNs, and Route 53 DNS configurations.
- Troubleshooting and Support: identify and fix cloud resource issues, perform root cause analysis.
- Migration Projects: migrate on-premises servers, databases, and applications to AWS.

Windows Server and Azure
- Active Directory: implementation, migration, management, and troubleshooting.
- Deep knowledge of DHCP Server.
- Deep knowledge of patch management.
- Troubleshooting the Windows operating system.
- Decent knowledge of Azure (creation of VMs, configuring network rules, migration, management, and troubleshooting).
- Deep knowledge of VMware ESXi (upgrading server firmware, creating VMs, managing backups, monitoring, etc.).

Networking
- Knowledge of IP addressing, NAT, P2P protocols, SSL, and IPsec VPNs.
- Deep knowledge of VPNs.
- Knowledge of MVoIP, VMs, SIP PRI, and leased lines.
- Monitor network bandwidth and maintain stability.
- Configure switches and routers.
- Troubleshoot network devices.
- Must be able to work on Cisco Meraki access point devices.

Firewall & Endpoint Security
- Decent knowledge of Fortinet firewalls, including creating objects, routing, creating rules, and monitoring.
- Decent knowledge of CrowdStrike.
- Knowledge of vulnerability assessment.

Office 365
- Deep knowledge of Office 365 (mailbox creation, backup and archive, security rules, security filters, creation of distribution lists, etc.).
- Knowledge of MX, TXT, and other DNS records.
- Deep knowledge of Office 365 apps like Teams, Outlook, and Excel.
- SharePoint management.

Other Tasks
- Hardware servicing of laptops and desktops.
- Keeping the asset inventory up to date.
- Managing utility invoices.
- Handling L1 and L2 troubleshooting.
- Vendor management.
- Handling application-related issues.
- Website hosting and monitoring.
- Tracking all software licenses and cloud service renewal periods, and ensuring they are renewed on time.
- Monitoring, managing, and troubleshooting servers.
- Knowledge of NAS.
- Knowledge of the Endpoint Central tool and ticketing tools.
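A minimal sketch of the kind of Python task automation this listing asks for: a disk-space check using only the standard library (the mount points and warning threshold are assumptions):

```python
import shutil

# Warn when any monitored mount point runs low on free space
MOUNTS = ["/", "/var", "/home"]   # assumed mount points
THRESHOLD = 0.10                  # warn below 10% free

for mount in MOUNTS:
    total, used, free = shutil.disk_usage(mount)
    if free / total < THRESHOLD:
        print(f"WARNING: {mount} has only {free / total:.1%} free")
```

In practice a script like this would run from cron and send its warnings to a log or alerting channel rather than stdout.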

Posted 6 days ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Source: LinkedIn

We’re Hiring: MLOps Engineer (Azure)
Contact: harshita.panchariya@tecblic.com
Location: Ahmedabad, Gujarat
Experience: 3–5 years
Employment Type: Full-time
* An immediate joiner will be preferred.

Job Summary:
We are seeking a skilled and proactive MLOps/DataOps Engineer with strong experience in the Azure ecosystem to join our team. You will be responsible for streamlining and automating machine learning and data pipelines, supporting scalable deployment of AI/ML models, and ensuring robust monitoring, governance, and CI/CD practices across the data and ML lifecycle.

Key Responsibilities

MLOps:
- Design and implement CI/CD pipelines for machine learning workflows using Azure DevOps, GitHub Actions, or Jenkins.
- Automate model training, validation, deployment, and monitoring using tools such as Azure ML, MLflow, or Kubeflow (see the MLflow sketch after this listing).
- Manage model versioning, performance tracking, and rollback strategies.
- Integrate machine learning models with APIs or web services using Azure Functions, Azure Kubernetes Service (AKS), or Azure App Services.

DataOps:
- Design, build, and maintain scalable data ingestion, transformation, and orchestration pipelines using Azure Data Factory, Synapse Pipelines, or Apache Airflow.
- Ensure data quality, lineage, and governance using Azure Purview or other metadata management tools.
- Monitor and optimize data workflows for performance and cost efficiency.
- Support batch and real-time data processing using Azure Stream Analytics, Event Hubs, Databricks, or Kafka.

DevOps & Infrastructure:
- Provision and manage infrastructure using infrastructure-as-code tools such as Terraform, ARM Templates, or Bicep.
- Set up and manage compute environments (VMs, AKS, AML Compute), storage (Blob, Data Lake Gen2), and networking in Azure.
- Implement observability using Azure Monitor, Log Analytics, and Application Insights.

Required Skills:
- Strong hands-on experience with Azure Machine Learning, Azure Data Factory, Azure DevOps, and Azure storage solutions.
- Proficiency in Python and Bash scripting for automation.
- Experience with Docker, Kubernetes, and containerized deployments in Azure.
- Good understanding of CI/CD principles, testing strategies, and ML lifecycle management.
- Familiarity with monitoring, logging, and alerting in cloud environments.
- Knowledge of data modeling, data warehousing, and SQL.

Preferred Qualifications:
- Azure certifications (e.g., Azure Data Engineer Associate, Azure AI Engineer Associate, or Azure DevOps Engineer Expert).
- Experience with Databricks, Delta Lake, or Apache Spark on Azure.
- Exposure to security best practices in ML and data environments (e.g., identity management, network security).

Soft Skills:
- Strong problem-solving and communication skills.
- Ability to work independently and collaboratively with data scientists, ML engineers, and platform teams.
- Passion for automation, optimization, and driving operational excellence.

Contact: harshita.panchariya@tecblic.com
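A minimal sketch of the model-tracking piece of the ML lifecycle this listing describes, using MLflow (the model, dataset, and metric are illustrative; in Azure ML, the tracking URI would point at the workspace):

```python
import mlflow
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real training set
X, y = make_regression(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    model = RandomForestRegressor(n_estimators=100).fit(X_train, y_train)
    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5

    # Record the run: parameters, metrics, and the serialized model artifact,
    # which is what enables versioning and rollback later
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("rmse", rmse)
    mlflow.sklearn.log_model(model, "model")
```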

Posted 6 days ago

Apply

6.0 years

0 Lacs

India

Remote

Source: LinkedIn

Job Position: Lead Java Fullstack Developer (Banking)
Experience: 6+ years
Location: Remote / Kochi / Chennai / Pune
Notice Period: Immediate joiner

We are looking for a highly skilled and proactive Full Stack Developer (FSE) with expertise in Java and Spring Boot to join our innovative and collaborative team. You will play a key role in the development of critical banking applications across various business domains. The ideal candidate will be self-driven, analytically strong, and thrive in a fast-paced environment.

Key Responsibilities
- Design, develop, and maintain business-critical banking applications.
- Implement new features using full-stack technologies, focusing on Java, Spring Boot, and REST APIs.
- Ensure high code quality through thorough unit testing, peer reviews, and adherence to coding best practices.
- Work closely with UI/UX designers, backend developers, and other stakeholders to deliver seamless and performant applications.
- Identify and resolve performance bottlenecks, bugs, and technical issues.
- Participate actively in Agile/Scrum development cycles and contribute to continuous improvement processes.

Required Qualifications
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.
- 4–6 years of hands-on experience in application development.

Technical Skills
- Strong proficiency in Java and Spring Boot.
- Experience with RESTful API development.
- Proficiency in Kubernetes / OpenShift.
- Familiarity with DevOps for CI/CD pipeline management.
- Experience with JMS and message queues.

Nice to Have
- Knowledge of Quarkus and Apache Camel.
- Understanding of core banking systems.
- Prior experience in the banking domain.
- Experience in customer-facing roles.

Posted 6 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Description
Amazon Retail Financial Intelligence Systems is seeking a seasoned and talented Senior Data Engineer to join the Fortune Platform team. Fortune is a fast-growing team with a mandate to build tools to automate profit-and-loss forecasting and planning for the Physical Consumer business. We are building the next generation of Business Intelligence solutions using big data technologies such as Apache Spark, Hive/Hadoop, and distributed query engines.

As a Data Engineer at Amazon, you will be working in a large, extremely complex, and dynamic data environment. You should be passionate about working with big data and be able to learn new technologies rapidly and evaluate them critically. You should have excellent communication skills and be able to work with business owners to translate business requirements into system solutions. You are a self-starter, comfortable with ambiguity, and at home in a fast-paced and ever-changing environment. Ideally, you are also experienced with at least one programming language such as Java, C++, Spark/Scala, or Python.

Major Responsibilities
- Work with a team of product and program managers, engineering leaders, and business leaders to build data architectures and platforms to support the business.
- Design, develop, and operate highly scalable, high-performance, low-cost, and accurate data pipelines on distributed data processing platforms.
- Recognize and adopt best practices in data processing, reporting, and analysis: data integrity, test design, analysis, validation, and documentation.
- Keep up to date with big data technologies; evaluate and make decisions around the use of new or existing software products to design the data architecture.
- Design, build, and own all the components of a high-volume data warehouse end to end.
- Provide end-to-end data engineering support for project lifecycle execution (design, execution, and risk assessment).
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
- Interface with other technology teams to extract, transform, and load (ETL) data from a wide variety of data sources (a minimal AWS Glue sketch follows this listing).
- Own the functional and non-functional scaling of software systems in your ownership area.
- Implement big data solutions for distributed computing.

Key job responsibilities
As a DE on our team, you will be responsible for leading the data modelling, database design, and launch of some of the core data pipelines. You will have significant influence on our overall strategy by helping define the data model, driving the database design, and spearheading the best practices for delivering high-quality products.

About The Team
Profit intelligence systems measure and predict true profit (or loss) for each item as a result of a specific shipment to an Amazon customer. Profit Intelligence is all about providing intelligent ways for Amazon to understand profitability across the retail business. What are the hidden factors driving growth or profitability across millions of shipments each day? We compute the profitability of each and every shipment that gets shipped out of Amazon, and we predict the profitability of future possible shipments too. We are a team of agile, can-do engineers who believe that not only are moon shots possible but that they can be done before lunch. All it takes is finding new ideas that challenge our preconceived notions of how things should be done. Process and procedure matter less than ideas and the practical work of getting stuff done. This is a place for exploring the new and taking risks. We push the envelope in using cloud services in AWS as well as the latest in distributed systems, forecasting algorithms, and data mining.

Basic Qualifications
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL

Preferred Qualifications
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company: ADCI MAA 12 SEZ
Job ID: A3006789
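AWS Glue is one of the ETL services the listing names. A minimal sketch of driving a Glue job from Python with boto3 follows; the job name and region are hypothetical:

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")   # assumed region

# Kick off a Glue ETL job run (job name is a placeholder)
run = glue.start_job_run(JobName="profit-pipeline-etl")

# Poll the run's state; a real orchestrator would retry with backoff
state = glue.get_job_run(
    JobName="profit-pipeline-etl",
    RunId=run["JobRunId"],
)["JobRun"]["JobRunState"]
print(state)  # e.g. RUNNING, SUCCEEDED, FAILED
```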

Posted 6 days ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

About Client
Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media. Our client is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: AWS Data Engineer
Key Skills: AWS, Data Engineering, Python, ETL, Snowflake, Apache Airflow
Locations: PAN India
Experience: 8-10 years
Education Qualification: Any graduation
Work Mode: Hybrid
Employment Type: Contract to hire
Notice Period: Immediate to 10 days

Job Description:
- 8 to 10 years of experience in data engineering roles with a focus on building scalable data solutions.
- Proficiency in Python for ETL, data manipulation, and scripting.
- Hands-on experience with Snowflake or equivalent cloud-based data warehouses.
- Strong knowledge of orchestration tools such as Apache Airflow or similar.
- Expertise in implementing and managing messaging queues like Kafka, AWS SQS, or similar (see the SQS sketch after this listing).
- Demonstrated ability to build and optimize data pipelines at scale, processing terabytes of data.
- Experience in data modeling, data warehousing, and database design.
- Proficiency in working with cloud platforms like AWS, Azure, or GCP.
- Strong understanding of CI/CD pipelines for data engineering workflows.
- Experience working in an Agile development environment, collaborating with cross-functional teams.
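A minimal sketch of consuming one of the messaging queues the listing names (AWS SQS) with boto3; the region and queue URL are placeholders:

```python
import boto3

sqs = boto3.client("sqs", region_name="ap-south-1")   # assumed region
queue_url = "https://sqs.ap-south-1.amazonaws.com/123456789012/ingest-queue"  # placeholder

# Long-poll for up to 10 messages, process each, then delete it
resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,   # long polling reduces empty responses
)
for msg in resp.get("Messages", []):
    print("processing", msg["Body"])
    # Deleting acknowledges the message; otherwise it reappears
    # after the visibility timeout expires
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```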

Posted 6 days ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Source: Indeed

Location: Noida, India

Thales people architect identity management and data protection solutions at the heart of digital security. Businesses and governments rely on us to bring trust to the billions of digital interactions they have with people. Our technologies and services help banks exchange funds, people cross borders, energy become smarter, and much more. More than 30,000 organizations already rely on us to verify the identities of people and things, grant access to digital services, analyze vast quantities of information, and encrypt data to make the connected world more secure.

Present in India since 1953, Thales is headquartered in Noida, Uttar Pradesh, and has operational offices and sites spread across Bengaluru, Delhi, Gurugram, Hyderabad, Mumbai, and Pune, among others. Over 1800 employees are working with Thales and its joint ventures in India. Since the beginning, Thales has been playing an essential role in India’s growth story by sharing its technologies and expertise in the Defence, Transport, Aerospace, and Digital Identity and Security markets.

SUMMARY
Thales DIS - Digital Payment offers a simplified digital experience solution for proximity payment and e-commerce, allowing banks and businesses to offer a multi-channel, multi-device payment experience. This sophisticated solution, which involves multiple web services and clients, offers a single platform that provides ready-to-market, easy-to-integrate services while ensuring best-in-class security.

We are seeking a solution integrator with an experienced and strong technical background in cloud environments to help with solution integration of our Digital Payment products and solutions. You will bring your engineering expertise and support experience to help deliver projects successfully within deadlines and costs, and work closely with the project manager and functional teams (Sales, Product, and R&D). We are looking for someone to lead the engineering aspect of the delivery team, continually help deliver projects, and ensure the team's technical competency matches the ongoing, rapidly changing technical environment.

Responsibilities
- Integrate and deliver solutions with customers in the cloud according to customer requirements and the project plan, following best practices.
- Provide visibility to the Project Manager on partner/customer integration progress.
- Provide technical guidance and participate in troubleshooting and validation of the whole solution until project completion and customer acceptance.
- Support the Project Manager in handing over all details of the completed project to the Operations team.
- Optimize solution and delivery processes.
- Work with several teams across the organization and in a multicultural environment.

PREFERRED SKILLS AND EXPERIENCE

Technical / functional skills
- Degree in Computer Science / Electrical / Electronics / Computer Engineering or related fields.
- Minimum 5 years of experience in IT.
- Strong knowledge and experience in AWS, SSL, certificates, key management, JSON, YAML files, Apache HTTP and other web servers (proxy, reverse proxy); a certificate-inspection sketch follows this listing.
- Understanding of SOAP / REST APIs.
- Proficiency working with Unix / Windows OS and in a cloud environment.
- Good knowledge of tools (e.g., Confluence, Git, Jira, Splunk).
- Knowledge of databases (Oracle and MongoDB) and ability to run SQL queries.
- Knowledge of Python, shell scripting, smart cards, EMV, cryptography, and security is a plus.
- Ability to pick up a new technology and perform development based on client requirements.
- Analytical and problem-solving skills.
- Good at documentation (specifications, user guide creation).
- Fluent in English.

Behavioral / other skills
- Key team player.
- Strong interpersonal and communication skills; ability to work well as part of a team and cooperate directly with international customers.
- Excellent organizational and time management skills.
- Proactive and decisive.
- Rigorous and stress-resistant.
- Ability to work across various organizations in the group.
- Mobile, for traveling on business trips to customer premises.

At Thales we provide CAREERS and not only jobs. With Thales employing 80,000 employees in 68 countries, our mobility policy enables thousands of employees each year to develop their careers at home and abroad, in their existing areas of expertise or by branching out into new fields. Together we believe that embracing flexibility is a smarter way of working. Great journeys start here, apply now!
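SSL and certificate work of the kind this listing asks about often starts with inspecting a server's certificate. A minimal sketch using only Python's standard library (the host is a placeholder):

```python
import socket
import ssl

HOST = "example.com"   # placeholder host

# Open a TLS connection with default verification and fetch the peer cert
ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

# getpeercert() returns issuer/subject as nested tuples of (key, value) pairs
print("issuer: ", dict(item[0] for item in cert["issuer"]))
print("expires:", cert["notAfter"])
```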

Posted 6 days ago

Apply

0.0 years

0 Lacs

Gurugram, Haryana

On-site

Source: Indeed

Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success. Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated and know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time-off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We’re building a more open world. Join us. Software Development Engineer III A high-performing individual contributor who acts as a mentor to more junior engineers, applies new engineering principles to improve existing systems, and is responsible for leading complex, well-defined projects. You will join Order Management Service (OMS) , which is the core system at Expedia that supports both pre-booking and post-booking processes. It plays a critical role in multiple ongoing business and technology initiatives aimed at expanding our offerings. OMS leverages a diverse technology stack, including Elasticsearch, Kotlin, AWS Cloud, Spring Boot, Kafka, Apache, and more. With a system availability of 99.99% for Tier 0 and Tier 1 services, OMS is designed for high reliability and performance. What you'll do: Proactively teams up with peers across the organization to build an understanding of cross dependencies and shared problem-solving. Participates in a community of practice to share and gain knowledge. Continually seeks new technical skills in an engineering area. Share new skills and knowledge with the team to increase effectiveness. Demonstrates knowledge of advanced and relevant technology. Is comfortable working with several forms of technology. Understands the relationship between applications, databases, and technology platforms. Develops and tests complex or non-routine software applications and related programs and procedures to ensure they meet design requirements. Effectively applies knowledge of software design principles, data structures and/or design patterns, and computer science fundamentals to write code that is clean, maintainable, optimized, modular with good naming conventions. Effectively applies knowledge of databases and database design principles to solve data requirements. Effectively uses the understanding of software frameworks and how to leverage them to write simpler code. Leads/clarifies code evolution in code reviews. Brings together different stakeholders with varied perspectives to develop solutions to issues and contributes its own suggestions. Thinks holistically to identify opportunities around policies/ processes to increase efficiency across organizational boundaries. Assists with a whole systems approach to analyzing issues by ensuring all components (structure, people, process, and technology) are identified and accounted for. Identifies areas of inefficiency in code or systems operation and offers suggestions for improvements. Compiles and reports on major operational or technical initiatives (like RCAs) to larger groups, whether via written or oral means. Who you are: 5+ years for Bachelor's 3+ years for Master's Developed software in at least 3 different languages. 
Maintained/ran at least 4 software projects/products in production environments (bug fixing, troubleshooting, monitoring, etc.). Has strength in a couple of languages and/or one language with multiple technology implementations. Identifies strengths and weaknesses among languages for particular use cases. Creates APIs to be consumed across the business unit. Selects among the technologies available to implement and solve the need. Understands how projects/teams interact with other teams. Understands and designs moderately complex systems. Tests and monitors code at the project level. Understands testing and monitoring tools. Debugs applications. Tests, debugs, and fixes issues within established SLAs. Designs easily testable and observable software. Understands how team goals fit a business need. Identifies business problems at the project level and provides solutions. Accommodation requests If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named a Best Place to Work on Glassdoor in 2024 and to be recognized for our award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50 Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident of who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age. India - Haryana - Gurgaon Technology Full-Time Regular 06/12/2025 ID # R-95983

Posted 6 days ago

Apply

0.0 years

0 Lacs

Pune, Maharashtra

On-site

Indeed logo

Job details Employment Type: Full-Time Location: Pune, Maharashtra, India Job Category: Innovation & Technology Job Number: WD30240368 Job Description Job Title: Software and Data Science Engineer Job Summary: We are seeking a highly motivated and analytical Data Scientist to join our team. The ideal candidate will possess a strong engineering background, a passion for solving technical problems, and the ability to work collaboratively with both technical and non-technical stakeholders. You will play a key role in developing and maintaining our platform, utilizing large-scale data to derive insights and drive business value, while working on both front-end and back-end components. What We Value: A highly analytical approach with an eagerness to solve technical problems using data structures, storage systems, cloud infrastructure, front-end frameworks, and other technical tools. Experience or curiosity about working with large-scale data to address valuable business challenges. The ability to collaborate efficiently within teams of diverse backgrounds, including technical and non-technical individuals. Comfort in a dynamic environment with evolving objectives, with a strong focus on iteration and user feedback. What We Require: A strong engineering background, preferably in Computer Science, Mathematics, Software Engineering, Physics, Data Science, or a related discipline. Proficiency in programming languages such as Python, Java, C++, TypeScript/JavaScript, or similar. Experience with both front-end and back-end development, including but not limited to: Front-end development with TypeScript and JavaScript frameworks. Back-end development using Apache Spark or similar technologies. Strong problem-solving skills and the ability to think critically about complex technical issues. Strong communication skills and the ability to explain technical concepts to non-technical audiences. Responsibilities: Design, build, and maintain scalable and reliable platform solutions, focusing on end-to-end data pipelines. Collaborate with cross-functional teams to identify and solve technical challenges across the stack. Utilize large-scale data to inform decision-making and enhance platform capabilities. Contribute to the development of best practices in software engineering and data management, with an emphasis on end-to-end data science workflows. Participate in code reviews and provide constructive feedback to peers. Continuously learn and stay updated on emerging technologies and industry trends.
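As a rough illustration of the back-end Spark work this listing mentions, here is a minimal PySpark sketch under stated assumptions: a local Spark installation, and a hypothetical events.csv file with user_id and event_timestamp columns (none of these come from the posting itself).

```python
# Minimal PySpark sketch: load raw events, aggregate per user per day,
# and persist the result -- a typical end-to-end pipeline building block.
# The file name and column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-rollup").getOrCreate()

# Infer the schema for brevity; production code would usually declare one.
events = spark.read.csv("events.csv", header=True, inferSchema=True)

daily_counts = (
    events
    .withColumn("day", F.to_date("event_timestamp"))
    .groupBy("user_id", "day")
    .agg(F.count("*").alias("event_count"))
)

daily_counts.write.mode("overwrite").parquet("daily_counts.parquet")
spark.stop()
```

The same read-transform-write skeleton scales from a laptop to a cluster, which is why listings like this pair Spark with large-scale data requirements.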

Posted 6 days ago

Apply

3.0 - 6.0 years

0 Lacs

Hyderabad, Telangana

On-site

Indeed logo

SRE {Java + React + DevOps} Hyderabad, India Information Technology 315970 Job Description About the Role: Grade Level (for internal use): 09 Job Description: We are seeking a skilled and motivated Application Operations Engineer for an SRE role with a Java, React JS, and Spring Boot skillset, along with expertise in Databricks, particularly with Oracle integration, to join our dynamic SRE team. The ideal candidate should have 3 to 6 years of experience supporting robust web applications using Java, React JS, and Spring Boot, with a strong background in managing and optimizing data workflows leveraging Oracle databases. The incumbent will be responsible for supporting applications, troubleshooting issues, and providing RCAs and suggested fixes by managing continuous integration and deployment pipelines, automating processes, and ensuring system reliability, maintainability, and stability. Responsibilities: The incumbent will work in CI/CD, handle infrastructure issues, have know-how in supporting operations, and maintain user-facing features using React JS, Spring Boot, and Java. Has the ability to support reusable components and front-end libraries for future use. Partners with development teams to improve services through rigorous testing and release procedures. Is willing to learn new tools and technologies as the project demands. Ensures the technical feasibility of UI/UX designs. Optimizes applications for maximum speed and scalability. Collaborates with other team members and stakeholders. Works closely with data engineers to ensure smooth data flow and integration. Creates and maintains documentation for data processes and workflows. Troubleshoots and resolves issues related to data integrity and performance. Good to have: working knowledge of the Tomcat application server and the Apache web server, Oracle, and Postgres; command of Linux & Unix. Self-driven individual. Requirements: Bachelor’s degree in computer science, engineering, or a related field. 3-6 years of professional experience. Proficiency in advanced Java and JavaScript, including DOM manipulation and the JavaScript object model. Experience with popular React JS workflows (such as Redux, MobX, Flux). Familiarity with RESTful APIs. Experience with cloud platforms such as AWS and Azure. Knowledge of CI/CD pipelines and DevOps practices. Experience with data engineering tools and technologies, particularly Databricks. Proficiency in Oracle database technologies and SQL queries. Excellent problem-solving skills and attention to detail. Ability to work independently and as part of a team. Good verbal and written communication skills. Familiarity with ITSM processes like incident, problem, and change management using ServiceNow (preferable). Ability to work in shifts. Grade - 09 Location - Hyderabad Hybrid mode - twice a week work from office Shift time - 6:30 am to 1 pm OR 2 pm to 10 pm IST About S&P Global Ratings At S&P Global Ratings, our analyst-driven credit ratings, research, and sustainable finance opinions provide critical insights that are essential to translating complexity into clarity so market participants can uncover opportunities and make decisions with conviction. By bringing transparency to the market through high-quality independent opinions on creditworthiness, we enable growth across a wide variety of organizations, including businesses, governments, and institutions. S&P Global Ratings is a division of S&P Global (NYSE: SPGI). 
S&P Global is the world’s foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world’s leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit www.spglobal.com/ratings What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. 
Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. S&P Global has a Securities Disclosure and Trading Policy (“the Policy”) that seeks to mitigate conflicts of interest by monitoring and placing restrictions on personal securities holding and trading. The Policy is designed to promote compliance with global regulations. In some Divisions, pursuant to the Policy’s requirements, candidates at S&P Global may be asked to disclose securities holdings. Some roles may include a trading prohibition and remediation of positions when there is an effective or potential conflict of interest. Employment at S&P Global is contingent upon compliance with the Policy. - Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf - 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 315970 Posted On: 2025-06-12 Location: Hyderabad, Telangana, India

Posted 6 days ago

Apply

0.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

Indeed logo

General information Country India State Telangana City Hyderabad Job ID 43743 Department Infor Consulting Services Experience Level MID_SENIOR_LEVEL Employment Status FULL_TIME Workplace Type On-site Description & Requirements Senior Software Engineer 7-9 years of experience in Java development. Expertise in designing and implementing Microservices with Spring Boot. Extensive experience in applying design patterns, system design principles, and expertise in event-driven and domain-driven design methodologies. Extensive experience with multithreading, asynchronous and defensive programming. Proficiency in MongoDB, SQL databases, and S3 data storage. Experience with Kafka, Kubernetes, AWS services & AWS SDK. Hands-on experience with Apache Spark. Strong knowledge of Linux, Git, and Docker. Familiarity with Agile methodologies and tools like Jira and Confluence. Excellent communication and leadership skills. Bachelor’s degree in Computer Science or a related field. About Infor Infor is a global leader in business cloud software products for companies in industry specific markets. Infor builds complete industry suites in the cloud and efficiently deploys technology that puts the user experience first, leverages data science, and integrates easily into existing systems. Over 60,000 organizations worldwide rely on Infor to help overcome market disruptions and achieve business-wide digital transformation. For more information visit www.infor.com Our Values At Infor, we strive for an environment that is founded on a business philosophy called Principle Based Management™ (PBM™) and eight Guiding Principles: integrity, stewardship & compliance, transformation, principled entrepreneurship, knowledge, humility, respect, self-actualization. Increasing diversity is important to reflect our markets, customers, partners, and communities we serve in now and in the future. We have a relentless commitment to a culture based on PBM. Informed by the principles that allow a free and open society to flourish, PBM™ prepares individuals to innovate, improve, and transform while fostering a healthy, growing organization that creates long-term value for its clients and supporters and fulfillment for its employees. Infor is an Equal Opportunity Employer. We are committed to creating a diverse and inclusive work environment. Infor does not discriminate against candidates or employees because of their sex, race, gender identity, disability, age, sexual orientation, religion, national origin, veteran status, or any other protected status under the law. If you require accommodation or assistance at any time during the application or selection processes, please submit a request by following the directions located in the FAQ section at the bottom of the infor.com/about/careers webpage.
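To make one slice of this stack concrete, here is a minimal sketch of MongoDB access via pymongo, one of the data stores the listing names. The connection string, database, collection, and documents are hypothetical, not taken from the posting.

```python
# Minimal pymongo sketch: insert sample documents and run an aggregation.
# Connection string and all names below are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

orders.insert_many([
    {"customer": "a", "total": 120.0, "status": "shipped"},
    {"customer": "b", "total": 80.0, "status": "pending"},
])

# Group order totals by status -- the kind of query a microservice might expose.
pipeline = [{"$group": {"_id": "$status", "revenue": {"$sum": "$total"}}}]
for row in orders.aggregate(pipeline):
    print(row["_id"], row["revenue"])

client.close()
```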

Posted 6 days ago

Apply

0.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Indeed logo

Chennai, Tamil Nadu, India Job ID 764206 Join our Team About this opportunity: Our team, belonging to the Software Pipeline & Support organization (SWPS), is looking for a Senior DevOps Engineer with strong technical leadership capabilities and a genuine interest in automation, able to shape and drive new initiatives to maintain and develop the Ericsson Support Systems Verification Tool (aka ESSVT) product build pipelines. ESSVT is a production-grade cloud-native application used by the Engineering and Services organizations within and outside Ericsson premises. ESSVT coordinates the automated execution of both functional and non-functional tests covering the entire Business & Operations Support Systems (BOS) product portfolio. ESSVT supports product, offering, and solution testing, improving testing lead times. ESSVT leverages the most widely used open-source testing technologies: Robot Framework and Apache JMeter. What you will do Analyze, design, and develop new pipeline features. Keep the pipelines up and running. Mandatory skills Python (Advanced) Git/Gerrit (Advanced) Jenkins (Advanced) Docker (Advanced) Kubernetes (Average) Shell (Average) Jira (Average) Nice-to-have skills ADP’s bob GitLab Test Automation tools (e.g. Robot FW, Apache JMeter) AWS, OCI or any other Cloud Platform GitOps – Flux What's in it for you? You will be part of a well-established, diverse, and automation-driven team spread around the world. Our mission is to make the life of the test organization easier by pushing standardized test automation through the whole SW lifecycle, from development to operations. Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build never-before-seen solutions to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply?
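Since the role pairs advanced Python with Robot Framework pipelines, one plausible building block is driving Robot Framework from Python, the way a Jenkins stage might. This is a minimal sketch under stated assumptions: a hypothetical tests/ directory and a 'smoke' tag, neither of which comes from the posting.

```python
# Minimal sketch: run Robot Framework suites programmatically, as a CI stage
# might. The suite path and tag are hypothetical.
from robot import run

# robot.run returns the number of failed tests; 0 means the stage passed.
rc = run(
    "tests",              # directory containing the .robot suites
    include=["smoke"],    # run only test cases tagged 'smoke'
    outputdir="results",  # where log.html, report.html, and output.xml land
)

raise SystemExit(rc)  # propagate the result as the process exit code
```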

Posted 6 days ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Indeed logo

Location Bangalore, Karnataka, 560048 Category Engineering / Information Technology Job Type Full time Job Id 1180663 Automation NoSQL Data Engineer This role has been designed as 'Onsite' with an expectation that you will primarily work from an HPE partner/customer office. Who We Are: Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today’s complex world. Our culture thrives on finding new and better ways to accelerate what’s next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE. Job Description: HPE Operations is our innovative IT services organization. It provides the expertise to advise, integrate, and accelerate our customers’ outcomes from their digital transformation. Our teams collaborate to transform insight into innovation. In today’s fast-paced, hybrid IT world, being at business speed means overcoming IT complexity to match the speed of actions to the speed of opportunities. Deploy the right technology to respond quickly to market possibilities. Join us and redefine what’s next for you. What you will do: Think through complex data engineering problems in a fast-paced environment and drive solutions to reality. Work in a dynamic, collaborative environment to build DevOps-centered data solutions using the latest technologies and tools. Provide engineering-level support for data tools and systems deployed in customer environments. Respond quickly and professionally to customer emails/requests for assistance. What you need to bring: Bachelor’s degree in Computer Science, Information Systems, or equivalent. 7+ years of demonstrated experience working in software development teams with a strong focus on NoSQL databases and distributed data systems. Strong experience in automated deployment, troubleshooting, and fine-tuning of technologies such as Apache Cassandra, ClickHouse, MongoDB, Apache Spark, Apache Flink, Apache Airflow, and similar technologies. Technical Skills: Strong knowledge of NoSQL databases such as Apache Cassandra, ClickHouse, and MongoDB, including their installation, configuration, and performance tuning in production environments. Expertise in deploying and managing real-time data processing pipelines using Apache Spark, Apache Flink, and Apache Airflow. Experience in deploying and managing Apache Spark and Apache Flink operators on Kubernetes and other containerized environments, ensuring high availability and scalability of data processing jobs. Hands-on experience in configuring and optimizing Apache Spark and Apache Flink clusters, including fine-tuning resource allocation, fault tolerance, and job execution. Proficiency in authoring, automating, and optimizing Apache Airflow DAGs for orchestrating complex data workflows across Spark and Flink jobs. Strong experience with container orchestration platforms (like Kubernetes) to deploy and manage Spark/Flink operators and data pipelines. Proficiency in creating, managing, and optimizing Airflow DAGs to automate data pipeline workflows, handle retries, task dependencies, and scheduling. 
Solid experience in troubleshooting and optimizing performance in distributed data systems. Expertise in automated deployment and infrastructure management using tools such as Terraform, Chef, Ansible, Kubernetes, or similar technologies. Experience with CI/CD pipelines using tools like Jenkins, GitLab CI, Bamboo, or similar. Strong knowledge of scripting languages such as Python, Bash, or Go for automation, provisioning Platform-as-a-Service, and workflow orchestration. Additional Skills: Accountability, Active Learning, Active Listening, Bias, Business Growth, Client Expectations Management, Coaching, Creativity, Critical Thinking, Cross-Functional Teamwork, Customer Centric Solutions, Customer Relationship Management (CRM), Design Thinking, Empathy, Follow-Through, Growth Mindset, Information Technology (IT) Infrastructure, Infrastructure as a Service (IaaS), Intellectual Curiosity, Long Term Planning, Managing Ambiguity, Process Improvements, Product Services, Relationship Building, and more. What We Can Offer You: Health & Wellbeing We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial and emotional wellbeing. Personal & Professional Development We also invest in your career because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have — whether you want to become a knowledge expert in your field or apply your skills to another division. Unconditional Inclusion We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. Let's Stay Connected: Follow @HPECareers on Instagram to see the latest on people, culture and tech at HPE. #india #operations Job: Services Job Level: TCP_03 HPE is an Equal Employment Opportunity/Veterans/Disabled/LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity. Hewlett Packard Enterprise is EEO Protected Veteran/Individual with Disabilities. HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
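As a concrete anchor for the Airflow DAG skills listed above, here is a minimal sketch of a two-task DAG in which a placeholder "Spark job" is followed by a validation step. It assumes Airflow 2.4+ (for the schedule parameter); the DAG id, task names, and callables are hypothetical.

```python
# Minimal Apache Airflow DAG sketch: two dependent tasks standing in for a
# Spark submission and a data-quality check. All names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_spark_job():
    print("submit the Spark job here")


def validate_output():
    print("check row counts, freshness, etc.")


with DAG(
    dag_id="nightly_rollup",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older releases use schedule_interval
    catchup=False,      # do not backfill runs for past dates
) as dag:
    submit = PythonOperator(task_id="run_spark_job", python_callable=run_spark_job)
    check = PythonOperator(task_id="validate_output", python_callable=validate_output)

    submit >> check  # validation runs only after the job task succeeds
```

Retries, task dependencies, and scheduling, all called out in the requirements, hang off this same DAG structure.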

Posted 6 days ago

Apply

5.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Indeed logo

Category: Software Development/Engineering Main location: India, Karnataka, Bangalore Position ID: J0525-0430 Employment Type: Full Time Position Description: Company Profile: At CGI, we’re a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com. This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position; however, only candidates selected for interviews will be contacted. No unsolicited agency referrals please. Job Title: Senior Software Engineer Position: Senior Software Engineer - Node, AWS, and Terraform Experience: 5-8 Years Category: Software Development/Engineering Main location: Hyderabad/Chennai/Bangalore Position ID: J0525-0430 Employment Type: Full Time Responsibilities: Design, develop, and maintain robust and scalable server-side applications using Node.js and JavaScript/TypeScript. Develop and consume RESTful APIs and integrate with third-party services. In-depth knowledge of the AWS cloud, including familiarity with services such as S3, Lambda, DynamoDB, Glue, Apache Airflow, SQS, SNS, ECS, Step Functions, EMR (Elastic MapReduce), EKS (Elastic Kubernetes Service), and Key Management Service. Hands-on experience with Terraform. Specializing in designing and developing fully automated end-to-end data processing pipelines for large-scale data ingestion, curation, and transformation. Experience in deploying Spark-based ingestion frameworks, testing automation tools, and CI/CD pipelines. Knowledge of unit testing frameworks and best practices. Working experience with databases - SQL and NoSQL (preferred) - including joins, aggregations, window functions, date functions, partitions, indexing, and performance improvement ideas. Experience with database systems such as Oracle, MySQL, PostgreSQL, MongoDB, or other NoSQL databases. Familiarity with ORM/ODM libraries (e.g., Sequelize, Mongoose). Proficiency in using Git for version control. Understanding of testing frameworks (e.g., Jest, Mocha, Chai) and writing unit and integration tests. Collaborate with front-end developers to integrate user-facing elements with server-side logic. Design and implement efficient database schemas and ensure data integrity. Write clean, well-documented, and testable code. Participate in code reviews to ensure code quality and adherence to coding standards. Troubleshoot and debug issues in development and production environments. 
Knowledge of security best practices for web applications (authentication, authorization, data validation). Strong communication and collaboration skills, with the ability to interact effectively with both technical and non-technical stakeholders. CGI is an equal opportunity employer. In addition, CGI is committed to providing accommodations for people with disabilities in accordance with provincial legislation. Please let us know if you require a reasonable accommodation due to a disability during any aspect of the recruitment process and we will work with you to address your needs. Skills: Node.js, RESTful APIs, Terraform What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
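Although the role centers on Node.js, the AWS interactions it lists look much the same in any SDK. Here is a minimal sketch in Python with boto3 touching two of the named services, S3 and SQS; the bucket and queue names are hypothetical, and credentials are assumed to be configured in the environment.

```python
# Minimal boto3 sketch: upload an object to S3, then notify a worker via SQS.
# Bucket and queue names are hypothetical; credentials come from the environment.
import json

import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

# Stage a payload for downstream processing.
s3.put_object(
    Bucket="my-ingest-bucket",
    Key="batch/2025-06-12.json",
    Body=json.dumps({"records": 42}),
)

# Tell the consumer where to find it.
queue_url = sqs.get_queue_url(QueueName="ingest-events")["QueueUrl"]
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"key": "batch/2025-06-12.json"}),
)
```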

Posted 6 days ago

Apply

Exploring Apache Jobs in India

Apache refers to the Apache Software Foundation and its wide range of open-source projects, from the Apache HTTP Server to big-data tools such as Hadoop, Spark, and Kafka. In India, the demand for professionals with expertise in Apache tools and technologies is on the rise. Job seekers looking to pursue a career in Apache-related roles have a plethora of opportunities across various industries. Let's delve into the Apache job market in India to gain a better understanding of the landscape.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

These cities are known for their thriving IT sectors and see a high demand for Apache professionals across different organizations.

Average Salary Range

The salary range for Apache professionals in India varies based on experience and skill level:

  • Entry-level: INR 3-5 lakhs per annum
  • Mid-level: INR 6-10 lakhs per annum
  • Experienced: INR 12-20 lakhs per annum

Career Path

In the Apache job market in India, a typical career path may progress as follows:

  1. Junior Developer
  2. Developer
  3. Senior Developer
  4. Tech Lead
  5. Architect

Related Skills

Besides expertise in Apache tools and technologies, professionals in this field are often expected to have skills in:

  • Linux
  • Networking
  • Database Management
  • Cloud Computing

Interview Questions

  • What is Apache HTTP Server and how does it differ from Apache Tomcat? (medium)
  • Explain the difference between Apache Hadoop and Apache Spark. (medium)
  • What is mod_rewrite in Apache and how is it used? (medium)
  • How do you troubleshoot common Apache server errors? (medium)
  • What is the purpose of .htaccess file in Apache? (basic)
  • Explain the role of Apache Kafka in real-time data processing. (medium) (a producer/consumer sketch follows this list)
  • How do you secure an Apache web server? (medium)
  • What is the significance of Apache Maven in software development? (basic)
  • Explain the concept of virtual hosts in Apache. (basic)
  • How do you optimize Apache web server performance? (medium)
  • Describe the functionality of Apache Solr. (medium)
  • What is the purpose of Apache Camel? (medium)
  • How do you monitor Apache server logs? (medium)
  • Explain the role of Apache ZooKeeper in distributed applications. (advanced)
  • How do you configure SSL/TLS on an Apache web server? (medium)
  • Discuss the advantages of using Apache Cassandra for data management. (medium)
  • What is the Apache Lucene library used for? (basic)
  • How do you handle high traffic on an Apache server? (medium)
  • Explain the concept of .htpasswd in Apache. (basic)
  • What is the role of Apache Thrift in software development? (advanced)
  • How do you troubleshoot Apache server performance issues? (medium)
  • Discuss the importance of Apache Flume in data ingestion. (medium)
  • What is the significance of Apache Storm in real-time data processing? (medium)
  • How do you deploy applications on Apache Tomcat? (medium)
  • Explain the concept of .htaccess directives in Apache. (basic)
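To give a flavor of the hands-on depth interviewers expect, here is a minimal sketch for the Kafka question above, using the kafka-python client. The broker address and the topic name are assumptions for illustration only.

```python
# Minimal kafka-python sketch: publish one message and read it back.
# Assumes a broker at localhost:9092 and a hypothetical 'clicks' topic.
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("clicks", key=b"user-1", value=b'{"page": "/home"}')
producer.flush()  # block until the broker acknowledges the message

consumer = KafkaConsumer(
    "clicks",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",  # start from the beginning of the topic
    consumer_timeout_ms=5000,      # stop iterating if no message arrives
)
for record in consumer:
    print(record.key, record.value)
```

Being able to explain what flush(), consumer groups, and offset resets do is usually what separates a basic answer from a strong one.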

Conclusion

As you embark on your journey to explore Apache jobs in India, it is essential to stay updated on the latest trends and technologies in the field. By honing your skills and preparing thoroughly for interviews, you can position yourself as a competitive candidate in the Apache job market. Stay motivated, keep learning, and pursue your dream career with confidence!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
