
16012 Kafka Jobs - Page 6

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Summary

Position Summary: AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
• Implement large-scale data ecosystems, including data management, governance and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
• Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
• Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing as-a-service offerings for continuous insights and improvements

Google Cloud Platform - Data Engineer

Cloud is shifting business models at our clients and transforming the way technology enables business. As our clients embark on this transformational journey to cloud, they are looking for trusted partners who can help them navigate it. Our clients’ journeys span from cloud strategy to implementation, from migration of legacy applications to supporting operations of a cloud ecosystem, and everything in between. Deloitte’s Cloud Delivery Center supports our client project teams in this journey by delivering the new solutions by which IT services are obtained, used, and managed. You will work with other technologists to deliver cutting-edge solutions using Google Cloud Platform (GCP) services, programming and automation tools for some of our Fortune 1000 clients. You will have the opportunity to contribute to work that may involve building a new cloud solution, migrating an application to co-exist in the hybrid cloud, deploying a global cloud application across multiple countries, or supporting a set of cloud managed services. Our teams of technologists have a diverse range of skills, and we are always looking for new ways to innovate and help our clients succeed. You will have an opportunity to leverage the skills you already have, try new technologies, and develop skills that will improve your brand and career as a well-rounded, cutting-edge technologist.

Work you’ll do

As a GCP Data Engineer you will have multiple responsibilities depending on project type. As a Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze and explore/visualize data on the Google Cloud Platform. You will work on data migrations and transformational projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform issues. In this role you are the Data Engineer working with Deloitte’s most strategic Cloud customers. Together with the team you will support customer implementation of Google Cloud products through architecture guidance, best practices, data migration, capacity planning, implementation, troubleshooting, monitoring and much more. The key responsibilities may involve some or all of the areas listed below:
• Act as a trusted technical advisor to customers and solve complex Big Data challenges.
• Create and deliver best-practice recommendations, tutorials, blog articles, sample code, and technical presentations, adapting to different levels of key business and technical stakeholders.
• Identify new tools and processes to improve the cloud platform and automate processes.

Qualifications

Technical Requirements
• BA/BS degree in Computer Science, Mathematics or a related technical field, or equivalent practical experience.
• Experience in Cloud SQL and Cloud Bigtable.
• Experience in Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub and Genomics.
• Experience in Google Transfer Appliance, Cloud Storage Transfer Service, and BigQuery Data Transfer.
• Experience with data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and with data processing algorithms (MapReduce, Flume).
• Experience working with technical customers.
• Experience writing software in one or more languages such as Java, C++, Python, Go and/or JavaScript.

Consulting Requirements
• 3-6 years of relevant consulting, industry or technology experience.
• Strong problem-solving and troubleshooting skills.
• Strong communicator.
• Willingness to travel in case of project requirements.

Preferred Qualifications
• Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools and environments.
• Experience in technical consulting.
• Experience architecting, developing software, or building internet-scale production-grade Big Data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS/Azure (good to have).
• Experience working with big data, information retrieval, data mining or machine learning, as well as experience building multi-tier high-availability applications with modern web technologies (such as NoSQL, Kafka, NLP, MongoDB, SparkML, TensorFlow).
• Working knowledge of ITIL and/or agile methodologies.

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits to help you thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose.
To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 300075
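
As a flavor of the day-to-day BigQuery work this role describes, here is a minimal sketch using the google-cloud-bigquery Python client; the project, dataset, and table names are hypothetical stand-ins, not specifics from the posting:

```python
from google.cloud import bigquery

# Assumes application-default credentials and a hypothetical project/dataset.
client = bigquery.Client(project="example-project")

query = """
    SELECT event_date, COUNT(*) AS events
    FROM `example-project.analytics.events`
    GROUP BY event_date
    ORDER BY event_date
"""

# client.query() submits the job; result() blocks until it completes.
for row in client.query(query).result():
    print(row.event_date, row.events)
```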

Posted 21 hours ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Full Stack Engineer
Location: Bengaluru

L&T Technology Services is seeking a Full Stack Engineer (experience range: 5+ years), proficient in:
• Strong hands-on experience with Spring Boot and Microservices architecture for scalable application development.
• Apache Kafka for real-time data streaming and event-driven systems.
• Solid working knowledge of AWS services for deploying and managing cloud-native applications.
• Experience with at least one modern JavaScript framework: React.js or Angular, or Node.js for building responsive UIs or APIs.
• Ability to work in an agile environment, contribute to system design, and collaborate across DevOps, QA, and frontend/backend teams.

Required Skills: Spring Boot, Microservices, Kafka, AWS, React.js or Angular or Node.js

Posted 21 hours ago

Apply

8.0 - 11.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Scope

We are a leading SaaS and AI-driven Global Supply Chain Solutions software product company and one of Glassdoor’s “Best Places to Work.” We are the only company recognized as a Leader in three 2021 Gartner Magic Quadrant reports covering supply chain planning solutions, transportation management systems, and warehouse management systems.

Our Current Technical Environment
• Software: Unix, any scripting language, WMS application (any), PL/SQL, API, MOCA
• Future software: Kafka, Stratosphere, Microservices, Java
• Application Architecture: Native SaaS, Cognitive
• Cloud Architecture: Private cloud, MS Azure (ARM templates, AKS, HDInsight, Application Gateway, Virtual Networks, Event Hub, Azure AD)

What Will You Do
• Support Engagements: Work with global technical and functional teams to support various customer engagements.
• Customer Interaction: Understand customer requests, support designed products/solutions to meet business requirements, and ensure high customer satisfaction.
• Issue Resolution: Address and resolve technical issues adhering to SLAs, document learnings, and create knowledge articles.
• Environment Management: Replicate and maintain customer environments and knowledge of customer solution architecture and integration points.
• Customer Satisfaction: Provide quality and timely solutions to improve customer satisfaction and follow up until closure.
• Stakeholder Interaction: Interact with internal and external stakeholders and report to management.
• Process Improvement: Identify areas for improvement and automation in routine tasks.
• Continuous Learning: Stay updated with new technologies and products, demonstrate quick learning ability, and maintain good interpersonal and communication skills.
• Architecture Simplification: Drive simpler, more robust, and efficient architecture and designs.
• Product Representation: Confidently represent product and portfolio, including vision and technical roadmaps, within the company and to strategic customers when necessary.

Detailed Responsibilities
• Customer Issue Resolution: Understand customer-raised issues, especially in Cloud/SaaS environments, and take appropriate actions to resolve them.
• Code Review: Review product source code or design documents as necessary.
• Case Management: Own and resolve all cases for global customers, adhering to defined SLAs.
• Knowledge Sharing: Document learnings and create knowledge articles for repeated cases.
• Environment Replication: Replicate and maintain customer environments.
• Solution Knowledge: Maintain knowledge of customer solutions and customizations.
• Urgency in Interaction: Demonstrate a sense of urgency and swiftness in all customer interactions.
• Techno-Functional Point of Contact: Act as the techno-functional POC for all cases, ensuring timely triage and assignment.
• Global Collaboration: Utilize instant messenger and other tools to collaborate globally.
• Shift Work: Work in rotational shifts and be flexible with timings.
• Goal Achievement: Meet organizational and team-level goals.
• Customer Satisfaction: Improve customer satisfaction by providing quality and timely solutions and following up until case closure.
• Process Automation: Identify areas for improvement and scope for automation in routine tasks or activities.
• Team Player: Help in meeting team-level goals and be a team player.

What We Are Looking For
• Educational Background: Bachelor’s degree (STEM preferred) with a minimum of 8 to 11 years of experience.
• Team Experience: Experience in working as a team.
• Skills: Good communication and strong analytical skills.
• Technical Proficiency: Experience working with complex SQL/Oracle DB queries.
• Domain Knowledge: Fair understanding of the Supply Chain domain.
• Support Engineering Experience: Experience in support engineering roles.
• Techno-Functional Expertise: Possess strong techno-functional expertise.
• Tech Savviness: Ability to adapt to any technology quickly.
• Critical Issue Support: Provide technical and solution support during critical/major issues.
• Tool Experience: Experience with varied tools such as AppDynamics, Splunk, and ServiceNow.
• Shift Flexibility: Flexible to work in shift timings. Shift 1: 6 am to 3 pm; Shift 2: 2 pm to 11 pm; Shift 3: 10 pm to 7 am.

Our Values
If you want to know the heart of a company, take a look at their values. Ours unite us. They are what drive our success – and the success of our customers. Does your heart beat like ours? Find out here: Core Values

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.

Posted 21 hours ago

Apply

11.0 years

0 Lacs

India

On-site

Company Description

👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums, and our people exist everywhere in the world (17500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!

Job Description

REQUIREMENTS:
• Total experience: 11+ years.
• Strong working experience with architecture and development in Java 8 or higher.
• Experience with front-end frameworks such as React, Redux, or Vue.
• Familiarity with Node.js and modern backend stacks.
• Deep knowledge of AWS, Azure, or GCP platforms and services.
• Strong experience with Azure DevOps, Git, Jenkins, and CI/CD pipelines.
• Deep understanding of design patterns, data structures, and microservices architecture.
• Strong knowledge of object-oriented programming, data structures, and algorithms.
• Experience with scalable system design, performance tuning, and application security.
• Familiarity with data integration patterns, middleware, and message brokers (e.g., Kafka, RabbitMQ).
• A good understanding of UML and design patterns.
• Strong experience with IBM Integration Composer & IBM ODM.
• Hands-on with container orchestration using Kubernetes, OpenShift.
• Working knowledge of security protocols like OAuth 2.0, SAML.
• Excellent communication skills and the ability to collaborate effectively with cross-functional teams.

RESPONSIBILITIES:
• Writing and reviewing great quality code.
• Understanding the client’s business use cases and technical requirements, and converting them into a technical design that elegantly meets the requirements.
• Mapping decisions with requirements and translating them for developers.
• Identifying different solutions and narrowing down the best option that meets the client’s requirements.
• Defining guidelines and benchmarks for NFR considerations during project implementation.
• Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers.
• Reviewing architecture and design on aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring that all relevant best practices are followed.
• Developing and designing the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to materialize it.
• Understanding and relating technology integration scenarios and applying these learnings in projects.
• Resolving issues raised during code review through exhaustive, systematic analysis of the root cause, and justifying the decisions taken.
• Carrying out POCs to make sure that suggested design/technologies meet the requirements.

Qualifications

Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.

Posted 21 hours ago

Apply

2.0 years

0 Lacs

India

On-site

We’re building the next-generation communications analytics and automation platform—one that fuses deep telemetry, enterprise-scale voice/calling data, and AI-driven remediation. As a Senior Backend Engineer, you'll play a core role in designing the resilient, scalable backend of a high-visibility platform that already drives action across global Microsoft Teams deployments.

This isn’t a maintenance gig. This is architecture, orchestration, and ownership. You’ll help design microservices, implement scalable APIs, and ensure data flows seamlessly from complex real-time systems (like call quality diagnostics and device telemetry) into actionable intelligence and automation pipelines. If you’re excited by backend systems with real-world impact—and want to transition into intelligent agentic systems powered by GenAI—this role is built for you.

What You'll Work On

Platform Engineering (Core Backend)
• Design and implement robust, cloud-native services using modern backend stacks (Node.js, Python, .NET Core, or similar).
• Build scalable APIs to surface data and actions across TeamsCoreIQ modules (call analytics, device insights, policy management, AI-based RCA).
• Integrate with Microsoft Graph APIs and Teams Calling infrastructure (Auto Attendants, Call Queues, Call Quality, Presence, Policies).
• Develop event-driven workflows using queues (Service Bus, Kafka, RabbitMQ) for high-throughput ingestion and action pipelines.
• Work with real-time data stores, telemetry ingestion, and time-series analytics backends (PostgreSQL, MongoDB, InfluxDB, or equivalent).

Infrastructure & DevOps Support
• Help scale and secure workloads using Azure, Kubernetes, and CI/CD pipelines (GitHub Actions, Azure DevOps).
• Implement observability practices (logging, metrics, alerting) for zero-downtime insights and RCA.

Future-Forward (Agentic Track)
Support the evolution of the backend toward intelligent agent orchestration:
• Build services that allow modular “agents” to retrieve, infer, and act (e.g. provisioning, remediation, escalation).
• Explore interfaces for integrating OpenAI, Azure AI, or RAG pipelines to make automation contextual and proactive.

What You Bring

Must-Have Technical Skills
• 2+ years backend engineering experience with production-grade systems.
• Strong proficiency in at least one modern backend language (Node.js, Python, Go, or .NET Core).
• Deep understanding of RESTful API design; GraphQL is a bonus.
• Experience building cloud-native apps on Azure (preferred), AWS or GCP.
• Familiarity with the Microsoft ecosystem: Graph API, Teams, Entra ID (AAD); SIP/VoIP call data a big plus.
• Experience with relational and NoSQL databases; data modeling and performance tuning.

Bonus (Not Mandatory, but Highly Valued)
• Exposure to AI/ML pipelines, LangChain, OpenAI API, or vector databases (Pinecone, Weaviate).
• Background in observability, root-cause analysis systems, or voice analytics.
• Experience with policy engines, RBAC, and multi-tenant SaaS platforms.

Traits We Love
• Systems Thinker – You optimize for scale and understand how backend services interact across a distributed system.
• Builder’s DNA – You love to own, refine, and ship high-quality features fast.
• Learning Velocity – You’re interested in agentic architectures, GenAI, and eager to transition toward intelligent orchestration.
• Code Ethic – You write clean, maintainable, testable code—and always think security-first.

Performance Expectations (First 30 Days)
• Ship a core module with full test coverage and observability.
• Deliver API endpoints for at least one major module (e.g. RCA, Call Analytics, DeviceIQ).
• Draft and refine at least one reusable internal service that improves time-to-market for future agents.
• Collaborate with frontend, DevOps, and AI teams to support rapid iteration and experimentation.

Posted 21 hours ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Head of Architecture and Technology (Hands-On, High-Ownership)

Company: Elysium PTE. LTD.
Location: Chennai, Tamil Nadu — at office
Employment Type: Full-time, permanent
Compensation: ₹15 L fixed CTC + up to 5% ESOP (performance-linked vesting, 4-year schedule with 1-year cliff)
Reports to: Founding Team
________________________________________
About Elysium
Elysium is a founder-led studio headquartered in Singapore with its delivery hub in Chennai. We are currently building a global gaming-based mar-tech platform while running a premium digital-services practice (branding, immersive web, SaaS MVPs, AI-powered solutions). We thrive on speed, experimentation and shared ownership.
________________________________________
The opportunity
We’re looking for a hungry technologist who can work in an early-stage start-up alongside the founders to build ambitious global products and services. You’ll code hands-on every week, shape product architecture, and grow a lean engineering pod—owning both our flagship product and client deliveries.
________________________________________
What you will achieve in your first 12 months
• Coordinate and develop the in-house products with internal and external teams.
• Build and mentor a six-to-eight-person engineering/design squad that hits ≥85% on-time delivery for IT-service clients.
• Cut mean time-to-deployment to under 30 minutes through automated CI/CD and Infrastructure-as-Code.
• Implement GDPR-ready data flows and a zero-trust security baseline across all projects.
• Publish quarterly tech radars and internal playbooks that keep the team learning and shipping fast.
________________________________________
Day-to-day responsibilities
• Resource management and planning across internal and external teams for our products and client deliveries.
• Pair-program and review pull requests to enforce clean, testable code.
• Translate product/user stories into domain models, sprint plans and staffing forecasts.
• Design cloud architecture (AWS/GCP) that balances cost and scale; own IaC, monitoring and on-call until an SRE is hired.
• Evaluate and manage specialist vendors for parts of the flagship app; hold them accountable on quality and deadlines.
• Scope and pitch technical solutions in client calls; draft SoWs and high-level estimates with founders.
• Coach developers and designers, set engineering KPIs, run retrospectives and post-mortems.
• Prepare technical artefacts for future fundraising and participate in VC diligence.
________________________________________
Must-have Requirements
• 5-8 years of modern full-stack development with at least one product shipped to >10k MAU or comparable B2B scale.
• Expert knowledge of modern full-stack ecosystems: Node.js, Python or Go; React/Next.js; distributed data stores (PostgreSQL, DynamoDB, Redis, Kafka or similar).
• Deep familiarity with AWS, GCP or Azure, including cost-optimized design, autoscaling, serverless patterns, container orchestration and IaC tools such as Terraform or CDK.
• Demonstrated ownership of DevSecOps practices: CI/CD, automated testing matrices, vulnerability scanning, SRE dashboards and incident post-mortems.
• Excellent communication skills, able to explain complex trade-offs to founders, designers, marketers and non-technical investors.
• Hunger to learn, ship fast, and own meaningful equity in lieu of a senior-corporate paycheck.
________________________________________
Nice-to-have extras
• Prior work in fintech, ad-tech or loyalty.
• Experience with WebGL/Three.js, real-time event streaming (Kafka, Kinesis), LLM pipelines & Blockchain.
• Exposure to seed- or Series-A fundraising, investor tech diligence or small-team leadership.
________________________________________
What we offer
• ESOP of up to 5% on a 4-year vest (1-year cliff) with performance accelerators tied to product milestones.
• Direct influence on tech stack, culture and product direction—your code and decisions will shape the company’s valuation.
• A team that values curiosity, transparency and shipping beautiful work at start-up speed.
________________________________________

Posted 22 hours ago

Apply

10.0 years

0 Lacs

Delhi, India

On-site

Company Size: Mid-Sized
Experience Required: 10 - 15 years
Working Days: 5 days/week
Office Location: Delhi

Role & Responsibilities
• Lead and mentor a team of data engineers, ensuring high performance and career growth.
• Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
• Drive the development and implementation of data governance frameworks and best practices.
• Work closely with cross-functional teams to define and execute a data roadmap.
• Optimize data processing workflows for performance and cost efficiency.
• Ensure data security, compliance, and quality across all data platforms.
• Foster a culture of innovation and technical excellence within the data team.

Ideal Candidate
• 10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.
• Expertise in backend development with programming languages such as Java, PHP, Python, Node.js, GoLang, JavaScript, HTML, and CSS.
• Proficiency in SQL, Python, and Scala for data processing and analytics.
• Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.
• Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice.
• Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.
• Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery.
• Deep knowledge of data governance, security, and compliance (GDPR, SOC2, etc.).
• Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.
• Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.
• Proven ability to drive technical strategy and align it with business objectives.
• Strong leadership, communication, and stakeholder management skills.

Preferred Qualifications
• Experience in machine learning infrastructure or MLOps is a plus.
• Exposure to real-time data processing and analytics.
• Interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture.
• Prior experience in a SaaS or high-growth tech company.

Perks, Benefits and Work Culture
Testimonial from a designer: "One of the things I love about the design team at Wingify is the fact that every designer has a style which is unique to them. The second best thing is non-compliance to pre-existing rules for new products. So I just don't follow guidelines, I help create them."

Skills: infrastructure, SOC2, Ansible, drive, data governance, Redshift, GDPR, JavaScript, Cassandra, design, Spring Boot, Jenkins, Docker, MongoDB, Java, TiDB, ELK, Python, PHP, AWS, Snowflake, LLD, Chef, BigQuery, GCP, GoLang, HTML, data, Kafka, Grafana, Kubernetes, Scala, CSS, Hadoop, Azure, Redis, SQL, data processing, Spark, HLD, Node.js, Google Guice, compliance

Posted 22 hours ago

Apply

6.0 - 8.0 years

0 - 0 Lacs

Bangalore, Noida, Chennai

Remote

Sr IT Data Analyst

We are currently seeking a Sr IT Data Analyst to perform data analysis for a data warehouse/operational data store, data marts, and other data stores in support of the Optum business. The new hire will define and maintain business intelligence/data warehouse methodologies, standards, and industry best practices. You will work with the Development and QA teams to develop data delivery/processing solutions and to create a Data Dictionary with full descriptions of data elements and their usage.

Responsibilities Include:
• Gather business requirements for analytical applications in an iterative/agile development model, partnering with Business and IT stakeholders
• Create source-to-target mappings based on requirements
• Create rules definitions, data profiling and transformation logic
• Gather and prepare analysis based on requirements from internal and external sources to evaluate and demonstrate program effectiveness and efficiency, and problem solving
• Support Data Governance activities and be responsible for data integrity
• Develop scalable reporting processes and query data sources to conduct ad hoc analyses/detailed data profiling
• Research complex functional data/analytical issues
• Assume responsibility for data integrity and data quality among various internal groups and/or between internal and external sources
• Provide source system analysis and perform gap analysis between source and target systems

Requirements:
• 5+ years of Healthcare business and data analysis experience
• Proficient in SQL; understands data modeling and storage concepts like Snowflake
• Must have an aptitude for learning new data flows quickly and participate in data quality and automation discussions
• Be comfortable working as an SME educating data consumers on data profiles and issues
• Must be able to take end-to-end responsibility for quickly solving data issues in a production setting
• Knowledge of Data Platforms, the Data-as-a-Service model and DataOps practices

Preferred Qualifications:
• Working knowledge of Kafka, Databricks, GitHub, Airflow, Azure (highly preferred)
• Healthcare industry Claims and Eligibility experience
• Experience in Python scripts
• Knowledge of AI models
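
As a sketch of the data-profiling work described above (per-column row, null, and distinct counts), the pattern might look like this in Python; stdlib sqlite3 stands in for a Snowflake connection, and the claims table is hypothetical:

```python
import sqlite3

# Toy stand-in for a warehouse table; in practice this would be a
# Snowflake connection and a real source table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (claim_id TEXT, member_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO claims VALUES (?, ?, ?)",
    [("c1", "m1", 120.0), ("c2", None, 75.5), ("c3", "m1", None)],
)

# Basic profile per column: row count, null count, distinct count.
for col in ("claim_id", "member_id", "amount"):
    total, nulls, distinct = conn.execute(
        f"SELECT COUNT(*), "
        f"SUM(CASE WHEN {col} IS NULL THEN 1 ELSE 0 END), "
        f"COUNT(DISTINCT {col}) FROM claims"
    ).fetchone()
    print(f"{col}: rows={total} nulls={nulls} distinct={distinct}")
```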

Posted 22 hours ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Java Full Stack Developer
Exp: 5+ Years
Mandatory Skills: Spring Boot for backend development and proficiency in ReactJS for front-end development

Required Skills
• Backend: Java, Spring Boot, Microservices, REST APIs, JPA/Hibernate
• Frontend: ReactJS, JavaScript, TypeScript, Redux
• Database: PostgreSQL, MySQL, MongoDB
• Cloud & DevOps: Docker, Kubernetes, CI/CD, GitHub Actions or Jenkins
• Messaging & Caching: Kafka, Redis
• Agile Practices: Jira, Confluence, Scrum

Salary: Max ₹20,00,000 per annum (20 LPA)

We are looking for a mid-level full stack developer with a strong backend focus to join our team. The ideal candidate should have hands-on experience in Spring Boot for backend development and be proficient in ReactJS for front-end development. The candidate will be responsible for developing, enhancing, and maintaining enterprise applications while working in an Agile environment.

Key Responsibilities

Backend Development:
• Design, develop, and maintain RESTful APIs using Spring Boot and Java.
• Implement microservices architecture and ensure high-performance applications.
• Work with relational and NoSQL databases, optimizing queries and performance.
• Integrate with third-party APIs and messaging queues (Kafka, RabbitMQ).

Frontend Development:
• Build and maintain user interfaces using ReactJS and modern UI frameworks.
• Ensure seamless API integration between front-end and back-end systems.
• Implement reusable components and optimize front-end performance.

DevOps & Deployment:
• Work with Docker and Kubernetes for application deployment.
• Ensure CI/CD pipeline integration and automation.

Collaboration & Agile Process:
• Work closely with onshore and offshore teams in a POD-based delivery model.
• Participate in daily stand-ups, sprint planning, and retrospectives.
• Write clean, maintainable, and well-documented code following best practices.

Preferred Qualifications
• Prior experience working on Albertsons projects is a huge plus.
• Familiarity with Google Cloud Platform (GCP) or any cloud platform.
• Exposure to monitoring tools like Prometheus, Grafana.
• Strong problem-solving skills and ability to work independently.

Posted 22 hours ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Title: Software Engineer - Backend (Python)
Experience: 7+ Years
Location: Hyderabad

About the Role:
Our team is responsible for building the backend components of the GenAI Platform. The platform offers:
• Safe, compliant and cost-efficient access to LLMs, including open-source and commercial ones, adhering to Experian standards and policies
• Reusable tools, frameworks and coding patterns for the various functions involved in either fine-tuning an LLM or developing a RAG-based application

What you'll do here
• Design and build backend components of our GenAI platform on AWS.
• Collaborate with geographically distributed cross-functional teams.
• Participate in on-call rotation with the rest of the team to handle production incidents.

What you'll need to succeed

Must-Have Skills
• At least 7+ years of professional backend web development experience with Python.
• Experience with AI and RAG.
• Experience with DevOps & IaC tools such as Terraform, Jenkins etc.
• Experience with MLOps platforms such as AWS SageMaker, Kubeflow or MLflow.
• Experience with web development frameworks such as Flask, Django or FastAPI.
• Experience with concurrent programming designs such as AsyncIO.
• Experience with any of the public cloud platforms like AWS, Azure, GCP, preferably AWS.
• Experience with CI/CD practices, tools, and frameworks.

Nice-to-Have Skills
• Experience with Apache Kafka and developing Kafka client applications in Python.
• Experience with big data processing frameworks, preferably Apache Spark.
• Experience with containers (Docker) and container platforms like AWS ECS or AWS EKS.
• Experience with unit and functional testing frameworks.
• Experience with various Python packaging options such as Wheel, PEX or Conda.
• Experience with metaprogramming techniques in Python.
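
As a flavor of the stack this role names (FastAPI plus AsyncIO), here is a minimal, hypothetical sketch of an async endpoint fronting an LLM call; the route and helper are illustrative, not Experian's actual API:

```python
import asyncio

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PromptRequest(BaseModel):
    prompt: str

async def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM gateway call; the sleep simulates
    # network latency without blocking the event loop.
    await asyncio.sleep(0.1)
    return f"echo: {prompt}"

@app.post("/v1/completions")
async def completions(req: PromptRequest) -> dict:
    answer = await call_llm(req.prompt)
    return {"completion": answer}

# Run locally with: uvicorn main:app --reload
```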

Posted 22 hours ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Title: Software Engineer - Backend (Python)

About The Role
Our team is responsible for building the backend components of an MLOps platform on AWS. The backend components we build are the fundamental blocks for feature engineering, feature serving, model deployment and model inference in both batch and online modes.

What You'll Do Here
• Design and build backend components of our MLOps platform on AWS.
• Collaborate with geographically distributed cross-functional teams.
• Participate in on-call rotation with the rest of the team to handle production incidents.

What you'll need to succeed

Must-Have Skills
• Experience with web development frameworks such as Flask, Django or FastAPI.
• Experience working with WSGI & ASGI web servers such as Gunicorn, Uvicorn etc.
• Experience with concurrent programming designs such as AsyncIO.
• Experience with unit and functional testing frameworks.
• Experience with any of the public cloud platforms like AWS, Azure, GCP, preferably AWS.
• Experience with CI/CD practices, tools, and frameworks.

Nice-to-Have Skills
• Experience with Apache Kafka and developing Kafka client applications in Python.
• Experience with MLOps platforms such as AWS SageMaker, Kubeflow or MLflow.
• Experience with big data processing frameworks, preferably Apache Spark.
• Experience with containers (Docker) and container platforms like AWS ECS or AWS EKS.
• Experience with DevOps & IaC tools such as Terraform, Jenkins etc.
• Experience with various Python packaging options such as Wheel, PEX or Conda.
• Experience with metaprogramming techniques in Python.

Primary Skills
• Python development (Flask, Django or FastAPI)
• WSGI & ASGI web servers (Gunicorn, Uvicorn etc.)
• AWS
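
The AsyncIO requirement above boils down to fanning out independent I/O-bound calls concurrently rather than serially; a minimal sketch, with the feature-store lookup replaced by a hypothetical stand-in:

```python
import asyncio
import random

async def fetch_feature(name: str) -> tuple[str, float]:
    # Stand-in for an async call to a feature store or model endpoint.
    await asyncio.sleep(random.uniform(0.05, 0.2))
    return name, random.random()

async def main() -> None:
    # gather() runs the lookups concurrently on one event loop.
    results = await asyncio.gather(*(fetch_feature(f"f{i}") for i in range(5)))
    for name, value in results:
        print(name, round(value, 3))

if __name__ == "__main__":
    asyncio.run(main())
```

Served behind an ASGI server such as Uvicorn, the same pattern keeps request handlers from blocking on slow downstream calls.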

Posted 22 hours ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

💼 Job Title: Kafka Developer
👨‍💻 Job Type: Full-time
📍 Location: Pune
💼 Work regime: Hybrid
🔥 Keywords: Kafka, Apache Kafka, Kafka Connect, Kafka Streams, Schema Registry

Position Overview:
We are looking for a Kafka Developer to design and implement real-time data ingestion pipelines using Apache Kafka. The role involves integrating with upstream flow record sources, transforming and validating data, and streaming it into a centralized data lake for analytics and operational intelligence.

Key Responsibilities:
• Develop Kafka producers to ingest flow records from upstream systems such as flow record exporters (e.g., IPFIX-compatible probes).
• Build Kafka consumers to stream data into Spark Structured Streaming jobs and downstream data lakes.
• Define and manage Kafka topic schemas using Avro and Schema Registry for schema evolution.
• Implement message serialization, transformation, enrichment, and validation logic within the streaming pipeline.
• Ensure exactly-once processing, checkpointing, and fault tolerance in streaming jobs.
• Integrate with downstream systems such as HDFS or Parquet-based data lakes, ensuring compatibility with ingestion standards.
• Collaborate with Kafka administrators to align topic configurations, retention policies, and security protocols.
• Participate in code reviews, unit testing, and performance tuning to ensure high-quality deliverables.
• Document pipeline architecture, data flow logic, and operational procedures for handover and support.

Required Skills & Qualifications:
• Proven experience in developing Kafka producers and consumers for real-time data ingestion pipelines.
• Strong hands-on expertise in Apache Kafka, Kafka Connect, Kafka Streams, and Schema Registry.
• Proficiency in Apache Spark (Structured Streaming) for real-time data transformation and enrichment.
• Solid understanding of IPFIX, NetFlow, and network flow data formats; experience integrating with nProbe Cento is a plus.
• Experience with Avro, JSON, or Protobuf for message serialization and schema evolution.
• Familiarity with Cloudera Data Platform components such as HDFS, Hive, YARN, and Knox.
• Experience integrating Kafka pipelines with data lakes or warehouses using Parquet or Delta formats.
• Strong programming skills in Scala, Java, or Python for stream processing and data engineering tasks.
• Knowledge of Kafka security protocols including TLS/SSL, Kerberos, and access control via Apache Ranger.
• Experience with monitoring and logging tools such as Prometheus, Grafana, and Splunk.
• Understanding of CI/CD pipelines, Git-based workflows, and containerization (Docker/Kubernetes).

A little about us:
Innova Solutions is a diverse and award-winning global technology services partner. We provide our clients with strategic technology, talent, and business transformation solutions, enabling them to be leaders in their field. Founded in 1998 and headquartered in Atlanta (Duluth), Georgia, Innova Solutions:
• Employs over 50,000 professionals worldwide, with annual revenue approaching $3.0B.
• Delivers strategic technology and business transformation solutions globally.
• Operates through global delivery centers across North America, Asia, and Europe.
• Provides services for data center migration and workload development for cloud service providers.
• Has received prestigious recognitions including:
  - Women’s Choice Awards - Best Companies to Work for Women & Millennials, 2024
  - Forbes, America’s Best Temporary Staffing and Best Professional Recruiting Firms, 2023
  - American Best in Business, Globee Awards, Healthcare Vulnerability Technology Solutions, 2023
  - Global Health & Pharma, Best Full Service Workforce Lifecycle Management Enterprise, 2023
  - 3 SBU Leadership in Business Awards
  - Stevie International Business Awards, Denials Remediation Healthcare Technology Solutions, 2023
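
The consumer half of the pipeline this listing describes (Kafka into Spark Structured Streaming, with checkpointing for fault tolerance, landing in a Parquet data lake) might be sketched as follows; broker address, topic, schema, and paths are placeholders, and a production job would use Avro with Schema Registry rather than JSON:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("flow-ingest").getOrCreate()

# Hypothetical flow-record schema (a few IPFIX-style fields).
schema = StructType([
    StructField("src_ip", StringType()),
    StructField("dst_ip", StringType()),
    StructField("ts", TimestampType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
       .option("subscribe", "flow-records")                # placeholder topic
       .load())

# Kafka delivers bytes; cast and parse the value column.
flows = (raw.select(from_json(col("value").cast("string"), schema).alias("r"))
            .select("r.*"))

# The checkpoint location is what gives restartability and end-to-end
# exactly-once delivery into the Parquet sink.
query = (flows.writeStream.format("parquet")
         .option("path", "/data/lake/flows")
         .option("checkpointLocation", "/data/checkpoints/flows")
         .start())
query.awaitTermination()
```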

Posted 23 hours ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Role: Senior Software Engineer
Experience Required: 4-6 years
Skills: Java, Spring Boot
Location: Sector 16, Noida
Work Mode: 5 days (Work from Office)
Interview Mode: Face-to-face
Notice Period: Immediate/Serving only

About Times Internet
At Times Internet, we create premium digital products that simplify and enhance the lives of millions. As India’s largest digital products company, we have a significant presence across a wide range of categories, including News, Sports, Fintech, and Enterprise solutions. Our portfolio features market-leading and iconic brands such as TOI, ET, NBT, Cricbuzz, Times Prime, Times Card, Indiatimes, Whatshot, Abound, Willow TV, Techgig and Times Mobile, among many more. Each of these products is crafted to enrich your experiences and bring you closer to your interests and aspirations.

As an equal opportunity employer, Times Internet strongly promotes inclusivity and diversity. We are proud to have achieved overall gender pay parity in 2018, verified by an independent audit conducted by Aon Hewitt. We are driven by the excitement of new possibilities and are committed to bringing innovative products, ideas, and technologies to help people make the most of every day. Join us and take us to the next level!

About the Business Unit: Architecture and Group Initiatives (AGI)
AGI owns the world-class Enterprise CMS solutions that empower all digital newsrooms within Times Internet and beyond. The solutions include state-of-the-art authoring tools with AI-enabled generative and assistive features, plus analytics and reporting tools and services that easily scale to millions of requests per minute. This unique scaling need and the engineering of state-of-the-art products make AGI a place of constant evolution and innovation across product, design and engineering in the ever-growing digital and print media industry landscape.

About the role:
We seek a highly skilled and experienced Java Senior Software Engineer to join our dynamic team and play a key role in designing, developing, and maintaining our Internet-based applications. As a Senior Engineer, you will actively participate in designing and implementing projects with high technical complexity, scalability, and performance implications. You will collaborate with cross-functional teams to deliver high-quality software solutions that meet customer needs and business objectives.

Roles and Responsibilities
• Design, develop, and test large-scale, high-performance web applications and frameworks.
• Create reusable frameworks through hands-on development and unit testing.
• Write clean, efficient, and maintainable code following best practices and coding standards.
• Troubleshoot and debug issues, and implement solutions on time.
• Participate in architectural discussions and contribute to the overall technical roadmap.
• Stay updated on emerging technologies and trends in Java development, and make recommendations for adoption where appropriate.

Skills Required:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• 4+ years of hands-on experience in Java development, with a strong understanding of core Java concepts and object-oriented programming principles.
• Proficiency in the Spring framework, including Spring Boot, Spring MVC, and Spring Data.
• Experience with Kafka for building distributed, real-time streaming applications.
• Strong understanding of relational databases such as MySQL, including schema design and optimization. Proficiency in writing SQL queries is a must.
• Experience with NoSQL databases such as MongoDB and Redis.
• Experience with microservices architecture and containerization technologies such as Docker and Kubernetes.
• Excellent problem-solving skills and attention to detail.
• Knowledge of software development lifecycle methodologies such as Agile or Scrum.
• Strong communication and collaboration skills.
• Ability to work effectively in a fast-paced environment and manage multiple priorities.
• Self-motivation and the ability to work under minimal supervision.

Posted 23 hours ago

Apply

1.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description

About Oracle Analytics & Big Data Service:
Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights.

Oracle Big Data Service, a part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service’s scope encompasses not just good integration with OCI’s native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching & upgrades and maintaining high availability of the service in the face of random failures & planned downtimes in the underlying infrastructure (e.g., patching the Linux kernels to address a security vulnerability). Developing systems for monitoring and gathering telemetry on the service’s runtime characteristics, and being able to act on that telemetry data, is part of the charter.

We are interested in experienced engineers with expertise and passion for solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space - we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.

Minimum Qualifications:
• Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.
• Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies.
• US passport holders only (required by the position to access US Gov regions).
• Expertise in coding in Java and Python, with an emphasis on tuning/optimization.
• Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments.
• Experience with open-source software in the Big Data ecosystem.
• Experience at an organization with an operational/dev-ops culture.
• Solid understanding of networking, storage, and security components related to cloud infrastructure.
• Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills.

Preferred Qualifications:
• Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink and other big data technologies.
• Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP.
• In-depth understanding of Java and JVM mechanics.
• Good problem-solving skills and the ability to work in a fast-paced, agile environment.

Responsibilities

Key Responsibilities:
• Participate in development and maintenance of a scalable and secure Hadoop-based data lake service.
• Code, integrate, and operationalize open- and closed-source data ecosystem components for Oracle cloud service offerings.
• Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements.
• Become an active member of the Apache open source community when working on open source components.
• Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud.

Qualifications
Career Level - IC2

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 23 hours ago

Apply

0 years

0 Lacs

Delhi, India

On-site

Description

Skills Required:
• Bash/shell scripting
• GitHub
• ETL
• Apache Spark
• Data validation strategies
• Docker & Kubernetes (for containerized deployments)
• Monitoring tools: Prometheus, Grafana
• Strong in Python
• Grafana-Prometheus, PowerBI/Tableau (important)

Requirements
• Extensive hands-on experience implementing data migration and data processing.
• Strong experience implementing ETL/ELT processes and building data pipelines, including workflow management, job scheduling and monitoring.
• Experience with building and implementing Big Data platforms on-prem or on cloud, covering ingestion (batch and real-time), processing (batch and real-time), polyglot storage, and data access.
• Good understanding of Data Warehouse, Data Governance, Data Security, Data Compliance, Data Quality, Metadata Management, Master Data Management, and Data Catalog.
• Proven understanding and demonstrable implementation experience of big data platform technologies on the cloud (AWS and Azure), including surrounding services like IAM, SSO, cluster monitoring, Log Analytics, etc.
• Experience with source code management tools such as TFS or Git.
• Knowledge of DevOps with CI/CD pipeline setup and automation.
• Building and integrating systems to meet business needs.
• Defining features, phases, and solution requirements and providing specifications accordingly.
• Experience building stream-processing systems, using solutions such as Azure Event Hub/Kafka etc.
• Strong experience with data modeling and schema design.
• Strong knowledge of SQL and NoSQL databases and/or BI/DW.
• Excellent interpersonal and teamwork skills.
• Experience with leading and mentoring other team members.
• Good knowledge of Agile Scrum.
• Good communication skills.
• Strong analytical, logic and quantitative ability.
• Takes ownership of a task; values accountability and responsibility.
• Quick learner.

Job responsibilities
ETL/ELT processes, data pipelines, Big Data platforms (on-prem/cloud), data ingestion (batch/real-time), data processing, polyglot storage, Data Governance, cloud (AWS/Azure), IAM, SSO, cluster monitoring, Log Analytics, source code management (Git/TFS), DevOps, CI/CD automation, stream processing (Kafka, Azure Event Hub), data modeling, schema design, SQL/NoSQL, BI/DW, Agile Scrum, team leadership, communication, analytical skills, ownership, quick learner

What we offer

Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.

Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market.
In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
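
Given the Prometheus/Grafana and Python emphasis above, here is a minimal, hypothetical sketch of instrumenting an ETL loop with the prometheus_client library (metric names and the scrape port are illustrative):

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

ROWS_PROCESSED = Counter("etl_rows_processed_total", "Rows processed by the ETL job")
BATCH_SECONDS = Histogram("etl_batch_seconds", "Wall-clock time per ETL batch")

def process_batch(rows: int) -> None:
    # Stand-in for real transform/load work.
    time.sleep(random.uniform(0.01, 0.05))
    ROWS_PROCESSED.inc(rows)

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://localhost:8000/metrics
    while True:
        with BATCH_SECONDS.time():
            process_batch(rows=random.randint(100, 1000))
```

Grafana can then chart these series, e.g. rate(etl_rows_processed_total[5m]), straight from Prometheus.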

Posted 23 hours ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description

About Oracle Analytics & Big Data Service:
Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights.

Oracle Big Data Service, a part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service’s scope encompasses not just good integration with OCI’s native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching & upgrades and maintaining high availability of the service in the face of random failures & planned downtimes in the underlying infrastructure (e.g., patching the Linux kernels to address a security vulnerability). Developing systems for monitoring and gathering telemetry on the service’s runtime characteristics, and being able to act on that telemetry data, is part of the charter.

We are interested in experienced engineers with expertise and passion for solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space - we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.

Minimum Qualifications:
• Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.
• Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies.
• US passport holders only (required by the position to access US Gov regions).
• Expertise in coding in Java and Python, with an emphasis on tuning/optimization.
• Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments.
• Experience with open-source software in the Big Data ecosystem.
• Experience at an organization with an operational/dev-ops culture.
• Solid understanding of networking, storage, and security components related to cloud infrastructure.
• Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills.

Preferred Qualifications:
• Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink and other big data technologies.
• Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP.
• In-depth understanding of Java and JVM mechanics.
• Good problem-solving skills and the ability to work in a fast-paced, agile environment.

Responsibilities

Key Responsibilities:
• Participate in development and maintenance of a scalable and secure Hadoop-based data lake service.
- Code, integrate, and operationalize open- and closed-source data ecosystem components for Oracle cloud service offerings.
- Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements.
- Become an active member of the Apache open source community when working on open source components.
- Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud.

Qualifications
Career Level - IC2

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
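For a concrete flavor of the telemetry work this role describes, here is a minimal sketch in Python of a consumer that watches a service-telemetry Kafka topic and flags unhealthy hosts. The broker address, topic name, payload shape, and threshold are all assumptions for illustration, not details of the actual service.

```python
# Minimal sketch: consume service telemetry from a Kafka topic and flag
# hosts whose reported heap usage crosses a threshold. Broker address,
# topic name, and payload shape are assumptions.
import json

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # assumed broker address
    "group.id": "telemetry-watchers",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["service-telemetry"])    # hypothetical topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())
        # Assumed payload: {"host": "node-07", "heap_used_pct": 87.5}
        if event.get("heap_used_pct", 0) > 85:
            print(f"ALERT: {event['host']} heap at {event['heap_used_pct']}%")
finally:
    consumer.close()
```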

Posted 23 hours ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description

About Oracle Analytics & Big Data Service: Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights. Oracle Big Data Service, part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service's scope encompasses not just tight integration with OCI's native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching and upgrades, and maintaining high availability of the service in the face of random failures and planned downtimes in the underlying infrastructure (e.g., patching the Linux kernel to address a security vulnerability). Developing systems for monitoring, gathering telemetry on the service's runtime characteristics, and acting on that telemetry data is also part of the charter. We are interested in experienced engineers with expertise in, and passion for, solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space: we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.

Minimum Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies.
- US passport holder (required by the position to access US Gov regions).
- Expertise in coding in Java and Python, with an emphasis on tuning/optimization.
- Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments.
- Experience with open-source software in the Big Data ecosystem.
- Experience at an organization with an operational/DevOps culture.
- Solid understanding of networking, storage, and security components related to cloud infrastructure.
- Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills.

Preferred Qualifications:
- Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink, and other big data technologies.
- Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP.
- In-depth understanding of Java and JVM mechanics.
- Good problem-solving skills and the ability to work in a fast-paced, agile environment.

Responsibilities

Key Responsibilities:
- Participate in development and maintenance of a scalable and secure Hadoop-based data lake service.
- Code, integrate, and operationalize open- and closed-source data ecosystem components for Oracle cloud service offerings.
- Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements.
- Become an active member of the Apache open source community when working on open source components.
- Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud.

Qualifications
Career Level - IC2

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
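The Hadoop-based data lake work above can be pictured with a minimal PySpark batch job. The HDFS paths and column names are assumptions for illustration; the posting does not describe the service's actual internals.

```python
# Minimal sketch: a batch Spark job over a Hadoop-based data lake, the kind
# of workload a managed Big Data service runs. HDFS paths and column names
# are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-usage-rollup").getOrCreate()

events = spark.read.parquet("hdfs:///datalake/events/")  # hypothetical path

daily = (
    events
    .withColumn("day", F.to_date("event_ts"))            # assumed timestamp column
    .groupBy("day", "service")
    .agg(
        F.count("*").alias("events"),
        F.approx_count_distinct("user_id").alias("users"),  # assumed id column
    )
)

# Write the rollup back to the lake, partitioned for downstream readers.
daily.write.mode("overwrite").partitionBy("day").parquet(
    "hdfs:///datalake/rollups/daily/"
)
spark.stop()
```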

Posted 23 hours ago

Apply

14.0 years

0 Lacs

Pune, Maharashtra, India

On-site

The Applications Development Technology Lead Analyst is a senior-level position responsible for establishing and implementing new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to lead applications systems analysis and programming activities.

Responsibilities:
- Partner with multiple management teams to ensure appropriate integration of functions to meet goals, and identify and define necessary system enhancements to deploy new products and process improvements
- Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
- Provide expertise in your area and advanced knowledge of applications programming, and ensure application design adheres to the overall architecture blueprint
- Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
- Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
- Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
- Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Qualifications:
- 14+ years of relevant experience in an Apps Development or systems analysis role
- Extensive experience in systems analysis and programming of software applications
- Experience in managing and implementing successful projects
- Subject Matter Expert (SME) in at least one area of Applications Development
- Ability to adjust priorities quickly as circumstances dictate
- Demonstrated leadership and project management skills
- Consistently demonstrates clear and concise written and verbal communication

Education:
- Bachelor's degree/University degree or equivalent experience
- Master's degree preferred

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Knowledge/Experience:
- 14+ years of industry experience
- Experience with Agile development and scrums
- Strong knowledge of Core Java and Spring (Core, Boot, etc.); expertise in Web API implementations (web services, RESTful services, etc.)
- Good understanding of Linux or Unix operating systems
- Strong knowledge of build (Ant/Maven), continuous integration (Jenkins), code quality analysis (SonarQube), and unit and integration testing (JUnit)
- Exposure to SCM tools like Bitbucket
- Strong knowledge of Docker/Kubernetes/OpenShift
- Strong knowledge of distributed messaging platforms (Apache Kafka, RabbitMQ, etc.)
- Good understanding of NoSQL databases like MongoDB

Skills:
- Hands-on coding experience in Core Java and Spring
- Hands-on coding experience in Python is a plus
- Strong analysis and design skills, including OO design patterns
- Solid understanding of SOA concepts and RESTful API design
- Ability to produce professional, technically sound, and visually appealing presentations and architecture designs
- Experience creating high-level technical/process documentation and presentations for audiences at various levels
- Experience writing/editing technical, business, and process documentation in an Information Technology/Engineering environment
- Must be able to understand requirements and convert them to technical design and code
- Knowledge of source code control systems, unit test frameworks, and build and deployment tools
- Experienced with large-scale program rollouts, with the ability to create and maintain detailed WBS and project plans

------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills
Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
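Since the posting names distributed messaging (Apache Kafka, RabbitMQ) and counts Python coding as a plus, here is a minimal Kafka producer sketch in Python. The broker address, topic, and payload are illustrative assumptions only; the role's primary stack is Java/Spring.

```python
# Minimal sketch: publish an order event to a Kafka topic with kafka-python.
# Broker address, topic, and payload shape are assumptions.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                       # assumed broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),  # JSON on the wire
)

# Hypothetical topic and event for illustration.
producer.send("orders", value={"order_id": "42", "status": "FILLED"})
producer.flush()   # block until the broker acknowledges the batch
producer.close()
```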

Posted 1 day ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About VOIS
VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group's partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VOIS has evolved into a global, multi-functional organization, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.

VOIS India
In 2009, VOIS started operating in India and now has established global delivery centers in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations, HR Operations and more.

Role Purpose
Mode: Hybrid
Location: Pune
Experience: 5 to 8 years

Core Competencies, Knowledge And Experience
- 5-7 years' experience in managing large data sets, simulation/optimization and distributed computing tools.
- Excellent communication and presentation skills, with a track record of engaging with business project leads.

Role Purpose
The primary responsibility is to define the data lifecycle, including data models and data sources for the analytics platform, gathering data from the business and cleaning it in order to provide ready-to-work inputs for Data Scientists. Apply strong expertise in automating end-to-end data science and big data pipelines (collect, ingest, store, transform and optimize at scale). The incumbent will work on assigned projects, and with their stakeholders, alongside Data Scientists to understand the business challenges they face. The work involves large data sets, simulation/optimization and distributed computing tools. The candidate works with the assigned business stakeholder(s) to agree scope, deliverables, process and expected outcomes for the products and services developed.
Must Have Technical / Professional Qualifications
- Experience working with large data sets, simulation/optimization and distributed computing tools
- Experience transforming data with Apache Spark for Data Science activities
- Experience working with distributed storage on cloud (AWS/GCP) or HDFS
- Experience building data pipelines with Airflow (a minimal sketch follows this listing)
- Experience ingesting data from different sources using Kafka/Sqoop/Flume/NiFi
- Experience solving simple to complex big data platform/framework issues
- Experience building real-time analytics systems with Apache Spark, Flink and Kafka
- Experience in Scala, Python, Java and R
- Experience working with NoSQL databases (Cassandra, MongoDB, HBase, Redis)

Key Accountabilities And Decision Ownership
- Understand the data science problems and design and schedule end-to-end pipelines
- For a given problem, identify the right big data technologies to solve it in an optimized way
- Automate the data science pipelines, deploy ML algorithms and track their performance
- Build customer-360 views and feature stores for different machine learning problems
- Build data models for the machine learning feature store on high-velocity, flexible-schema databases

VOIS Equal Opportunity Employer Commitment
VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees' growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics. As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 5 Best Workplaces for Diversity, Equity, and Inclusion, Top 10 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM and 14th Overall Best Workplaces in India by the Great Place to Work Institute in 2023. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills! Apply now, and we'll be in touch!
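As referenced above, here is a minimal Airflow DAG sketch for the ingest-then-transform pipelines the role describes. The DAG id, schedule, and task bodies are assumptions; a real pipeline would trigger Spark and Kafka jobs rather than print.

```python
# Minimal sketch: an Airflow DAG wiring the ingest -> transform steps.
# DAG id and schedule are assumptions; uses the Airflow 2.4+ `schedule` arg.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull raw events from Kafka into cloud storage")    # placeholder body

def transform():
    print("run the Spark job that builds the feature store")  # placeholder body

with DAG(
    dag_id="feature_store_pipeline",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    ingest_task >> transform_task      # transform runs only after ingest succeeds
```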

Posted 1 day ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

About This Role
Are you interested in building innovative technology that shapes the financial markets? Do you like working at the speed of a startup, and solving some of the world's most exciting challenges? Do you want to work with, and learn from, hands-on leaders in technology and finance?

At BlackRock, we are looking for Software Engineers who like to innovate and solve sophisticated problems. We recognize that strength comes from diversity, and will embrace your outstanding skills, curiosity, and passion while giving you the opportunity to grow technically and as an individual. We invest and protect over $9 trillion (USD) of assets and have an extraordinary responsibility to our clients all over the world. Our technology empowers millions of investors to save for retirement, pay for college, buy a home, and improve their financial well-being. Being a technologist at BlackRock means you get the best of both worlds: working for one of the most sophisticated financial companies and being part of a software development team responsible for next-generation technology and solutions.

What are Aladdin and Aladdin Engineering?
You will be working on BlackRock's investment operating system called Aladdin. Aladdin is used both internally within BlackRock and externally by many financial institutions. Aladdin combines sophisticated risk analytics with comprehensive portfolio management, trading, and operations tools on a single platform to power informed decision-making and create a connective tissue for thousands of users investing worldwide. Our development teams reside inside the Aladdin Engineering group. We collaboratively build the next generation of technology that changes the way information, people, and technology intersect for global investment firms. We build and package tools that manage trillions in assets and support millions of financial instruments. We perform risk calculations and process millions of transactions for thousands of users every day worldwide!

Being a member of Aladdin Engineering, you will be:
- Tenacious: Work in a fast-paced and highly complex environment
- Creative thinker: Analyse multiple solutions and deploy technologies in a flexible way
- Great teammate: Think and work collaboratively and communicate effectively
- Fast learner: Pick up new concepts and apply them quickly

Responsibilities include:
- Collaborate with team members in a multi-office, multi-country environment.
- Deliver high-efficiency, high-availability, concurrent and fault-tolerant software systems.
- Contribute significantly to the development of Aladdin's global, multi-asset trading platform.
- Work with product management and business users to define the roadmap for the product.
- Design and develop innovative solutions to complex problems, identifying issues and roadblocks.
- Apply validated quality software engineering practices through all phases of development.
- Ensure resilience and stability through quality code reviews; unit, regression and user acceptance testing; dev ops; and level-two production support.
- Be a leader with vision and a partner in brainstorming solutions for team productivity and efficiency, guiding and motivating others.
- Drive a strong culture by bringing principles of inclusion and diversity to the team and setting the tone through specific recruiting, management actions and employee engagement.

For VP level: In addition to the above, a VP-level candidate should be able to lead individual projects' priorities, deadlines and deliverables.

Qualifications
- B.S./M.S. degree in Computer Science, Engineering, or a related subject area; B.E./B.TECH./MCA or any other relevant engineering degree from a reputed university.
- For VP level: 8+ years of proven experience.

Skills and experience:
- A proven foundation in C++ and related technologies in a multiprocess, distributed UNIX environment
- Knowledge of Java, Perl, and/or Python is a plus
- Track record of building high-quality software with design-focused and test-driven approaches
- Experience working with an extensive legacy code base (e.g., C++98)
- Understanding of performance issues (memory, processing time, I/O, etc.)
- Understanding of relational databases is a must
- Great analytical, problem-solving and communication skills
- Some experience in, or a real interest in, finance and investment processes, and/or an ability to translate business problems into technical solutions
- For VP level: In addition to the above, a VP-level candidate should have experience leading development teams or projects, or being responsible for the design and technical quality of a significant application, system, or component, along with the ability to form positive relationships with partnering teams, sponsors, and user groups.

Nice to have, and opportunities to learn:
- Expertise in building distributed applications using SQL and/or NoSQL technologies like MS SQL, Sybase, Cassandra or Redis
- Real-world practice applying cloud-native design patterns to event-driven microservice architectures
- Exposure to high-scale distributed technology like Kafka, Mongo, Ignite, Redis
- Exposure to building microservices and APIs, ideally with REST, Kafka or gRPC
- Experience with optimization, algorithms or related quantitative processes
- Experience with cloud platforms like Microsoft Azure, AWS, Google Cloud
- Experience with cloud deployment technology (Docker, Ansible, Terraform, etc.)
- Experience with DevOps and tools like Azure DevOps
- Experience with AI-related projects/products, or experience working in an AI research environment
- Exposure to Docker, Kubernetes, and cloud services is beneficial
- A degree, certifications or open-source track record that shows you have a mastery of software engineering principles

Our Benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

Our hybrid work model
BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.

About BlackRock
At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children's educations, buying homes and starting businesses.
Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
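The "nice to have" list above points at event-driven microservices built on Kafka and Redis. Here is a minimal sketch of that pattern, written in Python for brevity even though the platform itself is C++/Java; the topic, cache host, and payload shape are assumptions.

```python
# Minimal sketch of an event-driven cache updater: consume position updates
# from Kafka and keep a Redis cache warm. Names and payload are assumptions.
import json

import redis
from kafka import KafkaConsumer

cache = redis.Redis(host="localhost", port=6379)   # assumed cache host

consumer = KafkaConsumer(
    "position-updates",                            # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for msg in consumer:
    update = msg.value  # assumed: {"portfolio": "...", "positions": {...}}
    # Cache the latest positions per portfolio with a 5-minute TTL, so
    # read-heavy services never query the system of record directly.
    cache.setex(
        f"positions:{update['portfolio']}", 300, json.dumps(update["positions"])
    )
```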

Posted 1 day ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Description

Skills Required:
- Bash/shell scripting
- GitHub
- ETL
- Apache Spark
- Data validation strategies
- Docker & Kubernetes (for containerized deployments)
- Monitoring tools: Prometheus, Grafana
- Strong in Python
- Grafana/Prometheus, Power BI/Tableau (important)

Requirements
- Extensive hands-on experience implementing data migration and data processing
- Strong experience implementing ETL/ELT processes and building data pipelines, including workflow management, job scheduling and monitoring
- Experience building and implementing Big Data platforms on-prem or on cloud, covering ingestion (batch and real-time), processing (batch and real-time), polyglot storage, and data access
- Good understanding of Data Warehouse, Data Governance, Data Security, Data Compliance, Data Quality, Metadata Management, Master Data Management, and Data Catalog
- Proven understanding and demonstrable implementation experience of big data platform technologies on the cloud (AWS and Azure), including surrounding services like IAM, SSO, cluster monitoring, Log Analytics, etc.
- Experience with source code management tools such as TFS or Git
- Knowledge of DevOps, with CI/CD pipeline setup and automation
- Building and integrating systems to meet business needs
- Defining features, phases, and solution requirements and providing specifications accordingly
- Experience building stream-processing systems using solutions such as Azure Event Hub, Kafka, etc.
- Strong experience with data modeling and schema design
- Strong knowledge of SQL and NoSQL databases and/or BI/DW
- Excellent interpersonal and teamwork skills
- Experience with leading and mentoring other team members
- Good knowledge of Agile Scrum
- Good communication skills
- Strong analytical, logical and quantitative ability
- Takes ownership of a task; values accountability and responsibility
- Quick learner

Job responsibilities
ETL/ELT processes, data pipelines, Big Data platforms (on-prem/cloud), data ingestion (batch/real-time), data processing, polyglot storage, Data Governance, cloud (AWS/Azure), IAM, SSO, cluster monitoring, Log Analytics, source code management (Git/TFS), DevOps, CI/CD automation, stream processing (Kafka, Azure Event Hub), data modeling, schema design, SQL/NoSQL, BI/DW, Agile Scrum, team leadership, communication, analytical skills, ownership, quick learner. A minimal validation-and-monitoring sketch follows this listing.

What we offer
Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you'll experience an inclusive culture of acceptance and belonging, where you'll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.

Learning and development. We are committed to your continuous learning and development. You'll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you'll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what's possible and bring new solutions to market.
In the process, you'll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.

Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!

High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you're placing your trust in a safe, reliable, and ethical global company. Integrity and trust are cornerstones of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world's largest and most forward-thinking companies. Since 2000, we've been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
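The skills list above pairs data validation strategies with Prometheus monitoring; as referenced in the job responsibilities, here is a minimal sketch of that combination in Python. The validation rules, field names, and port are assumptions.

```python
# Minimal sketch: row-level validation with failures exposed as a Prometheus
# counter. Rules, field names, and the port are assumptions.
from prometheus_client import Counter, start_http_server

rows_rejected = Counter("etl_rows_rejected_total", "Rows failing validation")

def validate(row: dict) -> bool:
    """Assumed rule set: an 'id' key must be present and amount non-negative."""
    ok = "id" in row and row.get("amount", -1) >= 0
    if not ok:
        rows_rejected.inc()
    return ok

start_http_server(8000)  # Prometheus scrapes :8000/metrics

batch = [{"id": 1, "amount": 10.0}, {"amount": -5}]  # toy input
clean = [r for r in batch if validate(r)]
print(f"kept {len(clean)} of {len(batch)} rows")
# A real job would keep running (e.g., as a long-lived pipeline worker) so
# the metrics endpoint stays up for scraping.
```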

Posted 1 day ago

Apply

2.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

About This Role
About the role: You can work with us at one of the top FinTech companies. We sell our Aladdin platform to over 200 of the top global corporations, in total managing about a quarter of all the world's money under management. BlackRock is a global but close-knit team of individuals who share a common goal of providing the very best possible level of support to our business partners and customers. From the top of the firm down, we embrace the diversity of values, identities and ideas brought by our employees. We are serious about our people and offer Flexible Time Off, collaborative working spaces and several other benefits.

An individual selected for this position will have the responsibility to cover business-critical compute workloads, real-time/interactive processing, data transfer services, application and new-technology onboarding and upgrades, and recovery procedures. The international team is split into 4 global regions to provide 24x7x365 support. Additional responsibilities may include developing more cost-effective and predictable methods for supporting a growing technology infrastructure; working with internal development groups to manage application changes as they are released to production environments; onboarding new technologies; and assisting in proof-of-concept build-outs and disaster recovery testing and planning. If any of this excites you, we want to talk to you.

Team Overview
The Service Management Operations Group is responsible for monitoring, supporting, and administering production environments for all BlackRock businesses (including subsidiaries and BlackRock Solutions clients), acting as a first responder for troubleshooting, problem resolution, and escalation. Collaborating with skilled professionals across the globe and managing a broad range of technologies and applications, the Operations Group delivers service quality and excellence through teamwork, innovating operational processes, and being part of the One BlackRock culture.

Role Responsibility
- Take complete ownership of ensuring that changes are fully completed and any affected services restored.
- Identify process improvements for change implementation and weekend checkouts; aid in incident management and root cause analysis.
- Provide ongoing operational support for the Aladdin infrastructure.
- Support and fix both batch processing and interactive user applications to ensure the high availability of the Aladdin environment.
- Use various tools to conduct analysis of system performance, root-cause diagnostics, and systems'/applications' design to understand and improve the operating quality of production environments.
- Engage in clear and concise communications, both verbally and in writing. Interact effectively on incident bridges and calls to ensure all distributed team members are constantly informed.
- Engineer solutions to expedite recovery of the environment after weekend maintenance (a small automation sketch follows this listing).
- Weekend shift work: you might be required to work weekend shifts on a rotational basis.

Qualifications
- 2-3 years of experience with a four-year degree specializing in Computer Science, MIS, Mathematics, Physics, or Engineering.
- 1+ years of experience as a DevOps Engineer, or strong exposure to and interest in the area.
- Good understanding of Linux administration fundamentals; must be familiar with typical administrative commands. Prior system administration experience highly desirable.
- Programming experience in at least one of the following: Java, Python or Perl, or shell scripting experience.
- A strong interest in, and aptitude for, quickly learning new technologies and proprietary systems.
- A positive demeanor and the ability to work as a teammate in a fast-paced environment.
- Ability to build opportunities to integrate and automate operational processes, procedures, and tooling.
- Experience working with cloud-native platforms, e.g., Azure, AWS, GCP.

Pluses
Prior experience with any of these technologies: Ansible, Chef, Jenkins, AWX, ServiceNow, Cutover, Autosys, Kafka, Kubernetes.

Our Benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

Our hybrid work model
BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.

About BlackRock
At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children's educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock

BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
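As referenced in the role responsibilities, recovery automation in this kind of role might look something like the following minimal sketch: check a Linux service and restart it if it is down. The unit name is hypothetical, and real tooling would alert an operator before restarting anything.

```python
# Minimal sketch: a recovery script that checks a systemd service and
# restarts it if inactive. The service name is a placeholder.
import subprocess
import sys

SERVICE = "batch-worker"   # hypothetical unit name

def is_active(service: str) -> bool:
    """Return True if systemd reports the unit as active."""
    result = subprocess.run(
        ["systemctl", "is-active", "--quiet", service], check=False
    )
    return result.returncode == 0

if not is_active(SERVICE):
    print(f"{SERVICE} is down; attempting restart", file=sys.stderr)
    # In production this step would page on-call before acting.
    subprocess.run(["sudo", "systemctl", "restart", SERVICE], check=True)
else:
    print(f"{SERVICE} is healthy")
```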

Posted 1 day ago

Apply

2.0 - 3.0 years

0 Lacs

Vasai Virar, Maharashtra, India

On-site

What is Contentstack?
Contentstack is on a mission to deliver the world's best digital experiences through a fusion of cutting-edge content management, customer data, personalization and AI technology. Iconic brands, such as AirFrance KLM, ASICS, Burberry, Mattel, Mitsubishi and Walmart, depend on the platform to rise above the noise in today's crowded digital markets and gain their competitive edge. Contentstack and its employees are dedicated to the customers and communities they serve. The company is recognized for its unmatched customer care and tradition of giving back globally through the Contentstack Cares program, including proud support of Pledge 1% and Girls Who Code. Learn more at www.contentstack.com.

Who Are We?
At Contentstack we are more than colleagues, we are a tribe. Our vision is to pursue equity among our communities, employees, partners, and customers. We are global and diverse yet close; distributed yet connected. We are dreamers and dream-makers who challenge the status quo. We do the right thing, even when no one is watching. We are curious trendspotters and brave trendsetters. Our mission is to make Contentstack indispensable for organizations to tell their stories and to connect with the people they care about through inspiring, modern experiences. We care deeply about our customers and the communities we serve. #OneTeamOneDream. Chalo, let's go!

What Are We Looking For?
Contentstack is looking for a Fullstack Engineer - ReactJS (Frontend) / NodeJS (Backend) who can work on our Editorial Experience.

Roles & Responsibilities:
- Work across the stack, from a code commit to running it in production, with the end goal of delivering the best possible experience for the user
- Design, develop and test features from inception to rollout
- Write high-quality code that is scalable, testable, maintainable and reliable
- Independently own and drive new features from scratch
- Work in an Agile environment and facilitate agile practices
- Champion best practices and cross-functional skill development

Required skill sets:
- 2-3 years of product and application development experience
- Experience working with ReactJS on the frontend and NodeJS on the backend
- Working experience with NoSQL databases like MongoDB, DynamoDB or Redis, or with PostgreSQL
- Good experience and understanding of working with microservice-based architecture
- Good knowledge of AWS, Kubernetes, Kafka, GraphQL, gRPC, etc. is preferred
- Experience with frameworks like ExpressJS, NestJS, Redux, Redux Saga, Storybook, etc. is preferred
- Past experience tackling scaling issues is preferred
- Experience practicing Agile software development methods is preferred
- Flexible and curious in adapting to new technologies and trends

Experience: 2-3 years
Location: Vasai Virar
Skills: React.JS, NodeJS, NoSQL (MongoDB or Redis)

What Do We Offer?
Interesting Work | We hire curious trendspotters and brave trendsetters. This is NOT your boring, routine, cushy, rest-and-vest corporate job. This is the "challenge yourself" role where you learn something new every day, never stop growing, and have fun while you're doing it.

Tribe Vibe | We are more than colleagues, we are a tribe. We have a strict "no a**hole policy" and enforce it diligently. This means we spend time together - with spontaneous office happy hours, organized outings, and community volunteer opportunities. We are a diverse and distributed team, but we like to stay connected.

Bragging Rights | We are dreamers and dream makers.
Our efforts pay off and we work with the most prestigious brands, from big-name retailers to airlines, to professional sports teams. Your contribution will make an impact with many of the most recognizable names in almost every industry including AirFrance KLM, ASICS, Burberry, Mattel, Mitsubishi, Walmart, and many more! One Team One Dream | This is one of our values, and it shows. We don't believe in artificial hierarchies. If you're part of the tribe, you have an opportunity to contribute. Your voice will be heard and you will also receive regular updates about the business and its performance. Which, btw, is through the roof, so it's a great time to be joining… To review our Privacy Policy, please click here.

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Join our Team

About this opportunity:
Ericsson invites applications for the role of DevOps Engineer. In this challenging and fulfilling position, you will be responsible for the detailed design of application and technical architecture components and classes according to the specification provided by the System Architect. The role also involves coding software components and contributing to the early testing phases, as well as extending your support towards system testing.

Responsibilities Include:
- Design and build automated pipelines for media ingest, processing and distribution.
- Implement and maintain CI/CD workflows tailored for media-centric applications and services.
- Architect and manage scalable cloud infrastructure (AWS) for high-availability media pipelines.
- Work on ways to automate and improve development and release processes.
- Implement robust monitoring/logging to ensure system reliability and performance.
- Implement DevSecOps best practices in pipeline management and cloud access control, ensuring that systems are safe and secure against cybersecurity threats.
- Work cross-functionally with software engineers, broadcast teams and operations to align on technical requirements.
- Mentor junior engineers and help establish best practices for DevOps in media environments.
- Test and examine code written by others and analyze results.
- Develop internal tools and scripts (Java, Python, Bash, Node.js) and use CloudFormation or similar tools to streamline media integration tasks and infrastructure as code (IaC).
- Assist with or perform software upgrades/migrations in projects.
- Maintain comprehensive documentation of pipelines, architectures and integration touchpoints in Confluence.
- Provide reports and analysis on cost optimization and system performance (optional).
- Provide training sessions and documentation to operations and support teams for new solutions.
- Identify areas of improvement in existing workflows and contribute to strategic enhancements.
- Plan, implement, and manage changes, adhering to established change control processes.
- Stay updated with industry trends and emerging technologies to improve solution design and delivery.

Technical Requirements
Must have:
- Strong AWS services knowledge (EC2, S3, Lambda, RDS, etc.).
- Expertise in CI/CD pipelines (Jenkins, Sonar, Git, etc.).
- Proficiency in container technologies, with a focus on Kubernetes.
- Experience with serverless, Kafka, Elasticsearch.
- Strong programming skills in Python or scripting languages.
- Experience with monitoring & logging tools (CloudWatch, ELK).

Supportive:
- Hands-on experience with database administration and tuning, e.g., graph databases, DynamoDB.

Good to have:
- Understanding of IP networking and common protocols such as FTP and SFTP.
- Knowledge of broadcast video formats, protocols, and encoding standards.

Core Competencies:
- Agile ways of working
- Good communication skills; proficiency in English
- Flexibility to work in different time zones
- Fast learner and good team player
- A positive approach to change, the ability to understand other cultures, and the ability to adapt to, benefit from and respect cultural differences.

Qualification and Experience:
- 5-9 years of relevant experience in the IT industry
- Bachelor's degree in computer engineering/information technology or equivalent

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible.
To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson, which is why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Chennai
Req ID: 770407
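The media-ingest pipelines this role describes typically begin with landing assets in S3. Here is a minimal boto3 sketch, with the bucket, key, and metadata assumed for illustration; a production pipeline would add checksums, retries, and event notifications.

```python
# Minimal sketch: drop a media asset into S3 with metadata, the first hop
# of a media-ingest pipeline. Bucket, key, and tags are assumptions.
import boto3

s3 = boto3.client("s3")

s3.upload_file(
    Filename="promo_master.mxf",            # local media file (assumed)
    Bucket="media-ingest-incoming",         # hypothetical bucket
    Key="2024/06/promo_master.mxf",
    ExtraArgs={"Metadata": {"codec": "xdcam", "source": "playout"}},
)
print("uploaded; downstream transcode jobs can now pick the asset up")
```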

Posted 1 day ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Key Responsibilities

Leadership & Mentoring
- Lead a team of Java developers, providing guidance, mentorship, and technical expertise.
- Facilitate effective communication across teams and stakeholders, ensuring alignment on project goals.
- Conduct code reviews, ensuring high-quality standards, and provide constructive feedback.
- Collaborate with Product Managers, Architects, and other stakeholders to define technical requirements.

Design & Architecture
- Design and implement scalable, maintainable, and high-performance Java applications.
- Define and maintain application architecture, ensuring consistency and scalability.
- Lead architectural discussions and decisions, ensuring solutions meet business requirements and technical specifications.

Development & Coding
- Write clean, efficient, and reusable Java code using best practices.
- Ensure that solutions adhere to coding standards and follow industry best practices for performance, security, and scalability.
- Develop RESTful APIs and integrate third-party services and applications.
- Leverage Java frameworks and tools such as Spring, Hibernate, and Maven to build applications.

Continuous Improvement
- Drive continuous improvement in development processes, tools, and methodologies.
- Keep up to date with new technologies, frameworks, and tools in the Java ecosystem and evaluate their potential benefits.
- Promote DevOps practices and help implement automated testing and CI/CD pipelines.

Problem Solving & Troubleshooting
- Analyze and troubleshoot issues in production environments.
- Optimize existing systems and resolve performance bottlenecks.
- Ensure that solutions are designed with reliability, maintainability, and extensibility in mind.

Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent work experience).
- 8+ years of experience in software development with a strong focus on Java and related technologies.
- Proven experience as a Tech Lead, Senior Developer, or Software Engineer in Java-based application development.
- Expertise in Java frameworks like Spring, Hibernate, and Spring Boot.
- Experience with microservices architecture and cloud platforms.
- Strong experience in Kafka, RabbitMQ, and Postgres.
- Strong knowledge of RESTful APIs, databases (SQL/NoSQL), and caching technologies (Redis, Memcached).
- Familiarity with tools such as Maven, Git, Docker, and Kubernetes.
- Experience with Agile development methodologies (Scrum/Kanban).
- Strong analytical and problem-solving skills, with a passion for delivering high-quality software solutions.
- Excellent communication and leadership skills, with the ability to mentor and collaborate with cross-functional teams.

Skills: maven,sql,redis,restful apis,aws,git,leadership,elasticsearch,spring boot,rabbitmq,microservices,problem-solving,cloud platforms,postgres,kafka,memcached,docker,devops,kubernetes,hibernate,java,agile methodologies,spring,sql/nosql databases,nosql,mentoring
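The caching requirement above (Redis, Memcached) usually implies the cache-aside pattern. Here is a minimal sketch, in Python for brevity even though the role is Java-centric; the key scheme, TTL, and stubbed database lookup are assumptions.

```python
# Minimal sketch of the cache-aside pattern: check the cache, fall back to
# the database on a miss, then populate the cache. The DB lookup is stubbed;
# key names and TTL are assumptions.
import json

import redis

cache = redis.Redis(host="localhost", port=6379)

def fetch_user_from_db(user_id: int) -> dict:
    return {"id": user_id, "name": "example"}   # stand-in for a Postgres query

def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)               # cache hit
    user = fetch_user_from_db(user_id)          # cache miss: go to the DB
    cache.setex(key, 600, json.dumps(user))     # populate with a 10-minute TTL
    return user

print(get_user(42))
```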

Posted 1 day ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
