
2905 DynamoDB Jobs - Page 6

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

You will be responsible for developing reliable server-side logic for serverless applications using public cloud services such as Google Cloud and AWS. With more than 3 years of experience, you should have strong proficiency in Node.js and JavaScript and an understanding of asynchronous programming. Your role will involve developing distributed systems that are scalable, reliable, and efficient. Your expertise in serverless architectures, including AWS Lambda, DynamoDB, Firebase Realtime Database, Google Cloud SQL, and Cloud Tasks, will be essential. Experience with NoSQL and SQL databases, creating database schemas, and integrating multiple data sources will be required. Additionally, you will be involved in deploying, maintaining, and debugging live systems, as well as end-to-end testing. You should have experience in creating microservices architectures, REST APIs, and data processing pipelines, and be familiar with various application architectures. Knowledge of code design practices, willingness to explore new frameworks, and adherence to project management methodologies like Agile are expected. Extra points for Google or AWS certifications and familiarity with object storage, in-memory caches, and security practices. Your responsibilities will include developing server-side logic for real-time multiplayer gaming backends, designing low-latency, high-availability systems, and maintaining databases for optimal performance. You will own the design, development, debugging, and scaling of backend services to ensure an exceptional user experience. Skills required include Java, NoSQL databases, and API integration.
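DynamoDB roles like this one usually hinge on composite-key data modeling. As a hedged illustration (written in Python for brevity, though the posting centers on Node.js; all entity and key names here are hypothetical, not from the posting), single-table keys for a gaming backend might be composed like this:

```python
# Single-table DynamoDB key design: one table holds player profiles and their
# match results, distinguished by composite partition/sort keys.
# All names are illustrative only.

def player_key(player_id: str) -> dict:
    """Item keys for a player profile record."""
    return {"PK": f"PLAYER#{player_id}", "SK": "PROFILE"}

def match_score_key(player_id: str, match_ts: str) -> dict:
    """Item keys for one match result; the sort key orders matches by timestamp."""
    return {"PK": f"PLAYER#{player_id}", "SK": f"MATCH#{match_ts}"}
```

With keys shaped this way, a single Query on PK = "PLAYER#42" with SK beginning with "MATCH#" would return that player's matches in timestamp order without touching other players' items.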

Posted 3 days ago

Apply

2.0 - 6.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

Golden Eagle IT Technologies Pvt. Ltd. is looking for a skilled Data Engineer with 2 to 4 years of experience to join the team in Indore. The ideal candidate should have a solid background in data engineering, big data technologies, and cloud platforms. As a Data Engineer, you will be responsible for designing, building, and maintaining efficient, scalable, and reliable data pipelines. You will be expected to develop and maintain ETL pipelines using tools like Apache Airflow, Spark, and Hadoop. Additionally, you will design and implement data solutions on AWS, leveraging services such as DynamoDB, Athena, Glue Data Catalog, and SageMaker. Working with messaging systems like Kafka for managing data streaming and real-time data processing will also be part of your responsibilities. Proficiency in Python and Scala for data processing, transformation, and automation is essential. Ensuring data quality and integrity across multiple sources and formats will be a key aspect of your role. Collaboration with data scientists, analysts, and other stakeholders to understand data needs and deliver solutions is crucial. Optimizing and tuning data systems for performance and scalability, as well as implementing best practices for data security and compliance, are also expected. Preferred skills include experience with infrastructure as code tools like Pulumi, familiarity with GraphQL for API development, and exposure to machine learning and data science workflows, particularly using SageMaker. Qualifications for this position include a Bachelor's degree in Computer Science, Information Technology, or a related field, along with 2-4 years of experience in data engineering or a similar role. Proficiency in AWS cloud services and big data technologies, strong programming skills in Python and Scala, knowledge of data warehousing concepts and tools, as well as excellent problem-solving and communication skills are required.
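The "data quality and integrity across multiple sources" responsibility above typically starts with simple record-level gates before data is loaded downstream. A minimal, framework-free Python sketch (field names and rules are hypothetical; a real pipeline would run this inside Airflow or Spark):

```python
# Minimal data-quality gate of the kind an ETL step might run before loading:
# drop records missing required fields and deduplicate on a business key.

def clean_records(records, required=("id", "ts"), key="id"):
    seen, out = set(), []
    for rec in records:
        if any(rec.get(f) in (None, "") for f in required):
            continue  # incomplete record: skip (or route to a quarantine table)
        if rec[key] in seen:
            continue  # duplicate business key: keep the first occurrence
        seen.add(rec[key])
        out.append(rec)
    return out
```

The same shape scales up naturally: the per-record checks become Spark filter expressions, and the dedup becomes a window or dropDuplicates over the business key.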

Posted 3 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Greenlight is the leading family fintech company on a mission to help parents raise financially smart kids. We proudly serve more than 6 million parents and kids with our award-winning banking app for families. With Greenlight, parents can automate allowance, manage chores, set flexible spend controls, and invest for their family’s future. Kids and teens learn to earn, save, spend wisely, and invest. At Greenlight, we believe every child should have the opportunity to become financially healthy and happy. It’s no small task, and that’s why we leap out of bed every morning to come to work. Because creating a better, brighter future for the next generation depends on it. Greenlight is looking for a Staff Engineer, Production Operations to join our growing team! As a Staff Engineer, you will be a technical leader and individual contributor within our production operations function. You will be responsible for designing, building, and maintaining highly reliable, scalable, and performant cloud infrastructure and systems. You will play a critical role in driving technical excellence, mentoring junior engineers, and solving our most complex scalability and reliability challenges. 
What you will be doing:
- Lead the design, implementation, and evolution of Greenlight's core cloud infrastructure and SRE practices to ensure high availability, scalability, and performance
- Act as a technical authority for complex SRE and cloud engineering challenges, providing expert guidance and solutions
- Drive significant architectural improvements to enhance system reliability, resilience, and operational efficiency
- Develop, maintain, and optimize our cloud infrastructure using Infrastructure as Code (primarily Terraform) and automation tools
- Collaborate closely with development and security teams to embed SRE principles into the software development lifecycle, promoting secure and reliable coding practices
- Design and implement robust monitoring, logging, and alerting solutions to provide comprehensive visibility into system health
- Participate in and lead incident response, performing deep-dive root cause analysis and driving actionable blameless postmortems to prevent recurrence
- Mentor and provide technical guidance to other SRE and Cloud Engineers, contributing to their growth and the team's overall technical capabilities
- Research, evaluate, and advocate for new technologies and tools that can improve our operational posture and efficiency
- Contribute to the strategic planning and roadmap development for the SRE and Cloud Engineering functions
- Enhance existing services and applications to increase availability, reliability, and scalability in a microservices environment
- Build and improve engineering tooling, process, and standards to enable faster, more consistent, more reliable, and highly repeatable application delivery

What you should bring:
- Technical Leadership: Lead complex technical projects and mentor engineers
- Communication: Articulate complex technical concepts clearly
- SRE Expertise: Apply SRE principles (SLIs, SLOs, error budgets) in production
- Distributed Systems: Understand and troubleshoot complex issues in distributed systems
- Monitoring & Alerting: Design and optimize monitoring, logging, and alerting systems (e.g., Datadog, Prometheus)
- Cloud Mastery (AWS): Expert-level knowledge of AWS services (e.g., EC2, S3, EKS)
- Infrastructure as Code (Terraform): Master IaC for cloud infrastructure management
- Containerization: Strong experience with Docker and Kubernetes in production
- Automation: Bias for automation and building self-healing systems
- Problem Solving: Exceptional analytical and problem-solving skills, proactively identifying bottlenecks

Technologies we use:
- AWS
- MySQL, DynamoDB, Redis
- GitHub Actions for CI pipelines
- Kubernetes (specifically EKS)
- Ambassador, Helm, Argo CD, Linkerd
- REST, gRPC, GraphQL
- React, Redux, Swift, Node.js, Kotlin, Java, Go, Python
- Datadog, Prometheus

Who we are: It takes a special team to aim for a never-been-done-before mission like ours. We’re looking for people who love working together because they know it makes us stronger, people who look to others and ask, “How can I help?” and then “How can we make this even better?” If you’re ready to roll up your sleeves and help parents raise a financially smart generation, apply to join our team. Greenlight is an equal opportunity employer and will not discriminate against any employee or applicant based on age, race, color, national origin, gender, gender identity or expression, sexual orientation, religion, physical or mental disability, medical condition (including pregnancy, childbirth, or a medical condition related to pregnancy or childbirth), genetic information, marital status, veteran status, or any other characteristic protected by federal, state or local law. Greenlight is committed to an inclusive work environment and interview experience. If you require reasonable accommodations to participate in our hiring process, please reach out to your recruiter directly or email recruiting@greenlight.me.
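The SLO and error-budget vocabulary in this posting reduces to simple arithmetic. A small illustrative Python sketch (the SLO target and window are example values, not Greenlight's):

```python
# Error-budget arithmetic behind SLO-based reliability work: given an
# availability SLO and a time window, how many "bad" minutes are tolerable,
# and how much of that budget remains after an incident.

def error_budget_minutes(slo: float, window_minutes: int) -> float:
    """Total allowed bad minutes in the window for an availability SLO."""
    return (1.0 - slo) * window_minutes

def budget_remaining(slo: float, window_minutes: int, bad_minutes: float) -> float:
    """Fraction of the error budget still unspent (negative means SLO breached)."""
    budget = error_budget_minutes(slo, window_minutes)
    return (budget - bad_minutes) / budget
```

For example, a 99.9% SLO over a 30-day window (43,200 minutes) allows about 43.2 bad minutes; a 21.6-minute outage spends exactly half the budget, which is the kind of signal teams use to gate risky releases.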

Posted 3 days ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Big Data Engineer (AWS-Scala Specialist)
Location: Greater Noida/Hyderabad
Experience: 5-10 Years

About the Role: We are seeking a highly skilled Senior Big Data Engineer with deep expertise in Big Data technologies and AWS Cloud Services. The ideal candidate will bring strong hands-on experience in designing, architecting, and implementing scalable data engineering solutions while driving innovation within the team.

Key Responsibilities:
- Design, develop, and optimize Big Data architectures leveraging AWS services for large-scale, complex data processing.
- Build and maintain data pipelines using Spark (Scala) for both structured and unstructured datasets.
- Architect and operationalize data engineering and analytics platforms (AWS preferred; Hortonworks, Cloudera, or MapR experience a plus).
- Implement and manage AWS services including EMR, Glue, Kinesis, DynamoDB, Athena, CloudFormation, API Gateway, and S3.
- Work on real-time streaming solutions using Kafka and AWS Kinesis.
- Support ML model operationalization on AWS (deployment, scheduling, and monitoring).
- Analyze source system data and data flows to ensure high-quality, reliable data delivery for business needs.
- Write highly efficient SQL queries and support data warehouse initiatives using Apache NiFi, Airflow, and Kylo.
- Collaborate with cross-functional teams to provide technical leadership, mentor team members, and strengthen the data engineering capability.
- Troubleshoot and resolve complex technical issues, ensuring scalability, performance, and security of data solutions.

Mandatory Skills & Qualifications:
✅ 5+ years of solid hands-on experience in Big Data technologies (AWS, Scala, Hadoop, and Spark mandatory)
✅ Proven expertise in Spark with Scala
✅ Hands-on experience with AWS services (EMR, Glue, Lambda, S3, CloudFormation, API Gateway, Athena, Lake Formation)

Share your resume at Aarushi.Shukla@coforge.com if you have experience with the mandatory skills and are an early joiner.
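The Kafka/Kinesis streaming work described above usually begins with windowed aggregation. A minimal, framework-free Python sketch of tumbling-window counting (a production pipeline would express this in Spark with Scala, per the posting; names here are illustrative):

```python
# Tumbling-window assignment, the first step of most streaming aggregations:
# bucket each event timestamp into a fixed-size window, then count per key.

def window_start(ts: int, size: int) -> int:
    """Start of the tumbling window (same units as ts) containing ts."""
    return ts - (ts % size)

def aggregate_counts(events, size=60):
    """Count events per (key, window) pair, e.g. clicks per user per minute."""
    counts = {}
    for key, ts in events:
        bucket = (key, window_start(ts, size))
        counts[bucket] = counts.get(bucket, 0) + 1
    return counts
```

The same windowing logic maps directly onto Spark Structured Streaming's `window()` grouping; doing it by hand first makes the semantics (and late-data edge cases) easier to reason about.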

Posted 3 days ago

Apply

2.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description: CloudOps Engineer

Who we are: Acqueon's conversational engagement software lets customer-centric brands orchestrate campaigns and proactively engage with consumers using voice, messaging, and email channels. Acqueon leverages a rich data platform, statistical and predictive models, and intelligent workflows to let enterprises maximize the potential of every customer conversation. Acqueon is trusted by 200 clients across industries to increase sales, drive proactive service, improve collections, and develop loyalty. At our core, Acqueon is a customer-centric company with a burning desire (backed by a suite of awesome, AI-powered technology) to help businesses provide friction-free, delightful, and referral-worthy customer experiences.

Position Overview: We are seeking a highly skilled CloudOps Engineer with expertise in Amazon Web Services (AWS) to join our team. The ideal candidate will be responsible for designing, implementing, and maintaining cloud infrastructure and SaaS applications, ensuring high availability, scalability, and security. You will work collaboratively with development, operations, and security teams to automate deployment processes, optimize system performance, and drive operational excellence.

As a Cloud Engineer in Acqueon you will need to:
- Ensure the highest uptime for customers in our SaaS environment
- Provision Customer Tenants & Manage SaaS Platform, Memos to the Staging and Production Environments
- Infrastructure Management: Design, deploy, and maintain secure and scalable AWS cloud infrastructure using services like EC2, S3, RDS, Lambda, and CloudFormation.
- Monitoring & Incident Response: Set up monitoring solutions (e.g., CloudWatch, Grafana) to detect, respond to, and resolve issues quickly, ensuring uptime and reliability.
- Cost Optimization: Continuously monitor cloud usage and implement cost-saving strategies such as Reserved Instances, Spot Instances, and resource rightsizing.
- Backup & Recovery: Implement robust backup and disaster recovery solutions using AWS tools like AWS Backup, S3, and RDS snapshots.
- Security Compliance: Configure security best practices, including IAM policies, security groups, and encryption, while adhering to organizational compliance standards.
- Infrastructure as Code (IaC): Use Terraform, CloudFormation, or AWS CDK to provision, update, and manage infrastructure in a consistent and repeatable manner.
- Automation & Configuration Management: Automate manual processes and system configurations using Ansible, Python, or shell scripting.
- Containerization & Orchestration: Manage containerized applications using Docker and Kubernetes (EKS) for scaling and efficient deployment.

Skills & Qualifications:
- 2-5 years of experience in Cloud Operations, Infrastructure Management, or DevOps Engineering.
- Deep expertise in AWS services (EC2, S3, RDS, VPC, Lambda, IAM, CloudFormation, etc.).
- Strong experience with Terraform for infrastructure provisioning and automation.
- Proficiency in scripting with Python, Bash, or PowerShell for cloud automation.
- Hands-on experience with monitoring and logging tools (AWS CloudWatch, Prometheus, Datadog, ELK Stack, etc.).
- Strong understanding of networking concepts, security best practices, IAM policies, and role-based access control (RBAC).
- Experience troubleshooting SaaS application performance, system reliability, and cloud-based service disruptions.
- Familiarity with containerization technologies (Docker, Kubernetes, AWS ECS, or EKS).
- Willingness to work in a 24/7 operational environment with rotational shifts.

Preferred Qualifications:
- AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer).
- Experience with hybrid cloud environments and on-premises-to-cloud migrations.
- Familiarity with other cloud platforms like Azure or GCP.
- Knowledge of database management (e.g., RDS, DynamoDB) and caching solutions (e.g., Redis, ElastiCache).
This is an excellent opportunity for those seeking to continue to build upon their existing skills. The right individual will be self-motivated and a creative problem solver. You should possess the ability to seek out the correct information efficiently through individual efforts and with the team. By joining the Acqueon team, you can enjoy the benefits of working for one of the industry's fastest growing and highly respected technology companies. If you, or someone you know, would be a great fit for us we would love to hear from you today! Use the form to apply today or submit your resume.
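The cloud-automation scripting this posting asks for often shows up as small building blocks like retry-with-backoff around flaky cloud API calls. A hedged Python sketch (function names are illustrative, not Acqueon's; the sleep function is injectable so the logic can be tested without real delays):

```python
import random
import time

# Retry with capped exponential backoff and full jitter, the usual pattern for
# transient cloud API errors (throttling, brief network blips). Illustrative only.

def call_with_backoff(fn, retries=5, base=0.5, cap=8.0, sleep=time.sleep):
    for attempt in range(retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # budget exhausted: surface the error to the caller
            # full jitter: sleep a random amount up to the capped exponential delay
            sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```

Jitter matters here: without it, many clients that failed together retry together, producing synchronized thundering-herd load on the recovering service.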

Posted 3 days ago

Apply

162.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Birlasoft: Birlasoft is a powerhouse where domain expertise, enterprise solutions, and digital technologies converge to redefine business processes. We take pride in our consultative and design thinking approach, driving societal progress by enabling our customers to run businesses with unmatched efficiency and innovation. As part of the CKA Birla Group, a multibillion-dollar enterprise, we boast a 12,500+ professional team committed to upholding the Group's 162-year legacy. Our core values prioritize Diversity, Equity, and Inclusion (DEI) initiatives, along with Corporate Sustainable Responsibility (CSR) activities, demonstrating our dedication to building inclusive and sustainable communities. Join us in shaping a future where technology seamlessly aligns with purpose.

About the Job: We are seeking an experienced Backend Developer to be part of the development of highly scalable applications on AWS cloud-native architecture. The ideal candidate will be part of a high-performing team with a strong background in Node.js, serverless programming, and Infrastructure as Code (IaC) using Terraform. You will be responsible for translating business requirements into robust technical solutions, ensuring high-quality code, and fostering a culture of technical excellence within the team.

Job Title: Sr Technical Lead
Location: All Birlasoft
Educational Background: Bachelor's degree in Computer Science, Information Technology, or a related field.

Key Responsibilities:
- Lead the design, development, and implementation of highly scalable and resilient backend applications using Node.js, TypeScript, and Express.js.
- Architect and build serverless solutions on AWS, leveraging services like AWS Lambda, API Gateway, and other cloud-native technologies.
- Utilize Terraform extensively for defining, provisioning, and managing AWS infrastructure as code, ensuring repeatable and consistent deployments.
- Collaborate closely with product managers, solution architects, and other engineering teams to capture detailed requirements and translate them into actionable technical tasks.
- Identify and proactively resolve technical dependencies and roadblocks.
- Design and implement efficient data models and integrate with NoSQL databases, specifically DynamoDB, ensuring optimal performance and scalability.
- Implement secure authentication and authorization mechanisms, including Single Sign-On (SSO) and integration with Firebase for user management.
- Ensure adherence to security best practices, coding standards, and architectural guidelines throughout the development lifecycle.
- Use unit testing and test-driven development (TDD) methodologies to ensure code quality, reliability, and maintainability.
- Conduct code reviews, provide constructive feedback, and mentor junior and mid-level developers to elevate the team's technical capabilities.
- Contribute to the continuous improvement of our development processes, tools, and best practices.
- Stay abreast of emerging technologies and industry trends, particularly in the AWS cloud and Node.js ecosystem, and evaluate their applicability to our projects.
Required Technical Skills:
- Node.js & JavaScript: Expert-level proficiency in Node.js, JavaScript (ES6+), and TypeScript.
- Frameworks: Strong experience with Express.js for building robust APIs.
- Serverless Programming: In-depth knowledge and hands-on experience with AWS Lambda and serverless architecture. Experience with designing and developing microservices architectures. Knowledge of Terraform for deployment of Lambda functions.
- AWS Cloud Native: Extensive experience designing and implementing solutions leveraging various AWS services (e.g., API Gateway, S3, SQS, SNS, CloudWatch, IAM).
- Databases: Strong integration experience with DynamoDB, including data modeling and query optimization.
- Authentication: Hands-on experience with Single Sign-On (SSO) implementation and Firebase integration.
- Testing: Solid understanding and practical experience with unit testing frameworks (e.g., Jest, Mocha) and test automation.

Desired Skills & Experience:
- A Bachelor's or Master's degree in Computer Science, Engineering, or a closely related discipline.
- Experience with CI/CD pipelines for automated deployment of serverless applications.
- Familiarity with containerization technologies (e.g., Docker) is a plus.
- Strong understanding of security principles and best practices in cloud environments.

Posted 3 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Hello Visionary! We know that the only way a business thrives is if our people are growing. That’s why we always put our people first. Our global, diverse team would be happy to support you and challenge you to grow in new ways. Who knows where our shared journey will take you?

We’re looking for a Full Stack Developer (Angular + Python).

You’ll make a difference by:
- Developing and testing high-quality, object-oriented software solutions using Python (FastAPI, Plotly), Angular, TypeScript, and CSS
- Building and deploying applications in a containerized cloud environment (Docker, Kubernetes, AWS)
- Working in Agile SAFe teams to implement features, conduct code reviews, and ensure quality delivery
- Contributing to GitLab CI/CD pipelines and supporting continuous deployment practices
- Collaborating with international teams to deliver impactful visualization tools for sustainability and performance monitoring

You’ll win us over by:
- An engineering degree (B.E/B.Tech/MCA/M.Tech/M.Sc) with a good academic record and 3-6 years of experience
- 3+ years of software development experience, with 2+ years in AWS cloud-native app development
- Solid hands-on experience in Python, Angular, TypeScript, and CSS
- Familiarity with GitLab CI/CD, Docker, Kubernetes, and AWS
- Willingness to learn and contribute in other tech stacks (e.g., C#, Go)
- Strong commitment to clean code, agile practices, and technical excellence
- Proficiency with tools like Eclipse, IntelliJ, or Visual Studio Code
- Excellent written and verbal communication skills

Bonus Points For:
- Experience with AWS services (Lambda, DynamoDB, API Gateway, SQS/SNS, CloudWatch) and Terraform
- Background in test automation, TDD, profiling, and code refactoring
- Familiarity with Git, SonarQube, and Agile ceremonies (sprint planning, retros, reviews)
- International collaboration experience in distributed agile teams
- Certifications in Python, Go, or C# technologies

What You’ll Gain:
- Collaborate with global product teams with 20+ years of technical excellence.
- Work in a disciplined SAFe Agile environment that values both delivery and work-life balance.
- Make meaningful contributions to product success in a transparent and empowering culture.
- Build scalable platforms that support sophisticated modular applications.

Create a better #TomorrowWithUs! This role, based in Chennai, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. We are dedicated to equality and welcome applications that reflect the diversity of the communities we serve. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and imagination, and help us shape tomorrow. Find out more about Siemens careers at: www.siemens.com/careers

Posted 3 days ago

Apply

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

PayPay India is looking for a Backend Engineer to work on our payment system and deliver the best payment experience for our customers.

Responsibilities:
- Design large-scale systems with high complexity to support our high-throughput applications.
- Understand how to leverage infrastructure to solve such large-scale problems.
- Develop tools and contribute to open source wherever possible.
- Adopt problem solving as a way of life - always go to the root cause!
- Support the code you write in production.

Requirements:
- Tech Stack: Java, Kotlin, Scala, Spring Boot, JUnit, Resilience4j, Feign, MySQL/AuroraDB, DynamoDB, ELK, Kafka, Redis, TiDB, Docker, Kubernetes, ArgoCD, AWS, GCP, GitHub, IntelliJ, Gradle, Maven, npm/yarn, Flyway, Jenkins, Snyk, BigQuery, Kibana, Spark, PlantUML, draw.io, Miro.com, Slack, Zoom.
- 6 years of experience with excellent skills in Java and any other general-purpose programming language, such as Scala, Python, or Go.
- Interest in and the ability to learn other coding languages as needed.
- Experience with SQL and NoSQL databases, along with distributed caches.
- Strong fundamentals in data structures, algorithms, and object-oriented programming.
- In-depth understanding of concurrency and distributed computing.
- Experience implementing platform components such as RESTful APIs, pub/sub systems, and database clients.
- Experience with microservices.
- Experience designing high-traffic systems.
- Degree in Computer Engineering or Computer Science, or 5+ years of equivalent experience in SaaS platform development.
- Business-level English or Japanese.

Preferred Qualifications:
- Experience in system development in finance, payments, or similar industries.
- Language ability in both Japanese and English is a plus (we have a professional translator, but it is nice to have language skills).
- Experience with AWS services.

This job was posted by Tanu Jha from PayPay India.
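High-throughput payment platforms like the one described lean heavily on rate limiting to protect downstream services. A minimal token-bucket sketch, in Python purely for illustration since PayPay's stack is JVM-based; the clock is passed in explicitly so the logic is deterministic and testable:

```python
# Token-bucket rate limiter: tokens refill continuously at `rate` per time unit,
# capped at `capacity`; each allowed request spends one token. Illustrative only.

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)  # start full
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # refill proportionally to elapsed time, capped at bucket capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

The capacity sets how bursty traffic may be, while the rate sets the sustained throughput ceiling; in production this state would live in a shared store such as Redis rather than in-process.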

Posted 3 days ago

Apply

10.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Role - Data Analytics Architect
Exp. - 10+ years
Location - PAN India (Prefer - Thane, Mumbai, Hyderabad)
Required Technical Skill Set - Snowflake

Desired Competencies (Technical/Behavioural Competency):
- Experience in architecture definition, design, and implementation of data lake solutions on Azure/AWS/Snowflake.
- Designs and models data lake architecture; implements standards, best practices, and processes to improve the management of information and data throughout its lifecycle across this platform.
- Design and implement data engineering, ingestion, and curation functions on the data lake using native components or custom programming (Azure/AWS).
- Proficient in tools/technologies: Azure (Azure Data Factory, Synapse, ADLS, Databricks, etc.)/AWS (Redshift, S3, Glue, Athena, DynamoDB, etc.)/Snowflake technology stack/Talend/Informatica.
- Analyze data requirements, application and processing architectures, data dictionaries, and database schema(s).
- Analyzes complex data systems and documents data elements, data flow, relationships, and dependencies.
- Collaborates with Infrastructure and Security Architects to ensure alignment with enterprise standards and designs.
- Data modelling, data warehousing, dimensional modelling, data modelling for Big Data, and metadata management.
- Knowledge of data catalogue tools, metadata management, and data quality management.
- Experience in design and implementation of dashboards using tools like Power BI, Qlik, etc.
- Strong oral and written communication skills; good presentation skills.
- Analytical skills.
- Business orientation and acumen (exposure).
- Advisory experience, to be able to be positioned or seen as an expert.
- Willingness to travel internationally and collocate with clients for short or long terms.
- Basic knowledge of advanced analytics; exposure to leveraging Artificial Intelligence and Machine Learning for analysis of complex and large datasets, with tools like Python/Scala, etc.
Responsibilities:
- Executing various consulting and implementation engagements for data lake solutions
- Data integration, data modelling, data delivery, statistics, analytics, and math
- Identify the right solutions to business problems
- Learn and leverage tools/technologies and product solutions in the Data & Analytics area
- Implement advanced analytics and cognitive analytics models
- Support RFPs by providing business perspective, participate in RFP discussions, and coordinate within support groups in TCS
- Conduct business research and demonstrate thought leadership through analyst engagements, white papers, and participation in industry focus areas

Posted 3 days ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

We are hiring for one of the IT product-based companies.

Job Title: Staff Software Engineer
Exp: 8+ years
Location: Gurgaon/Pune
Work Mode: Hybrid

Skills:
- Java, Spring Boot
- AWS Lambda, S3, DynamoDB, RDS (MySQL)
- Infrastructure as Code: Terraform
- DevOps and CI/CD: expertise in building CI/CD pipelines using tools like Azure DevOps, Jenkins, or AWS CodePipeline for deployment automation.

Posted 3 days ago

Apply

5.0 years

0 - 0 Lacs

India

Remote

Luxury Presence is the leading digital platform revolutionizing the real estate industry for agents, teams, and brokerages. Our award-winning websites, cutting-edge marketing solutions, and AI-powered mobile platform empower real estate professionals to grow their business, operate more efficiently, and deliver exceptional service to their clients. Trusted by over 80,000 real estate professionals, including 31 of the nation’s 100 top-performing agents as published in the Wall Street Journal, Luxury Presence continues to set the standard for innovation and excellence in real estate technology. About The Role We’re seeking a Senior ML-focused Data Platform Engineer to strengthen our MLS data platform team. You will build robust data pipelines and deliver advanced ML solutions—embeddings, fine-tuning, retrieval-augmented generation (RAG), and reinforcement learning from user feedback. Your work powers property discovery, personalized recommendations, conversational agents, and the evaluation infrastructure that keeps them improving. Who is the Data Squad? We make sure clean, reliable MLS listing records and user click-stream data is always available to our products and customers. Our team—a mix of data engineers and software engineers—owns the entire listing pipeline: ingestion, transformation, and normalization across 400+ MLS feeds and other sources. We also extend the platform to capture user-activity data for new features and build AI agents that automate feed onboarding and listing-issue triage, reducing manual effort for internal teams and clients and shortening the path from data to business impact. 
Strategic Projects (Year 1):
- Architect our foundational home-ranking and recommendation platforms to incorporate advanced deep-learning and transformer-based models, dramatically accelerating experimentation across every user touchpoint
- Autonomous MLS AI agents: Launch AI agents that reduce the time to onboard new MLS listing feeds and triage/resolve listing issues using structured and unstructured data

What You’ll Do:
- Architect and operate scalable batch and streaming data systems (Spark/EMR, Kafka, SQL)
- Own data quality, performance, and reliability through automated testing and monitoring
- Design transactional and analytical data models and feature stores
- Fine-tune embeddings and ML models; deploy RAG- and RL-based ranking pipelines
- Integrate and optimize LLMs for conversational agents
- Evolve the evaluation stack (A/B, offline metrics, model monitoring) to track impact end-to-end
- Collaborate with product, engineering, and business stakeholders; mentor peers; shape the long-term ML data-platform strategy

Our Tech Stack:
- Python, Spark Streaming, Kafka, Iceberg, FastAPI & Node.js microservices
- AWS, EMR, Kubernetes, Airflow
- Postgres, DynamoDB, Athena, ElasticSearch, LanceDB

Qualifications (Required):
- BS/MS in Computer Science or a related field, or equivalent experience
- 5+ years building large-scale data pipelines on AWS or GCP with Spark/EMR, Kafka, and SQL
- Strong with a backend programming language like Python or Java; production experience with TensorFlow or PyTorch
- Delivered ML-powered features in recommendations, search/ranking, or conversational AI
- Hands-on with embeddings, RAG, and reinforcement learning from feedback
- Familiar with vector databases, LLM deployment, Kubernetes/EKS, and modern CI/CD
- Excellent communicator who drives results across product, engineering, and business teams

Preferred:
- Proven wins building large-scale ranking or personalization platforms
- Experience with integrating Large Language Models in production systems
- Experience leading projects and mentoring engineers
- Proven success working in Agile environments

Join us in shaping the future of real estate. The real estate industry is in the midst of a seismic shift, and the future belongs to those who break new ground. As one of the fastest-growing companies in the proptech and marketing sectors, Luxury Presence challenges the status quo of what technology can do for real estate agents, leaders, and brokerages. We’re a team of agile and tenacious innovators working collaboratively to drive the industry forward. Together, we build game-changing products that empower modern real estate entrepreneurs to dominate their markets. From award-winning web design to agile SEO solutions to cutting-edge AI tools, we deliver tech that anticipates market shifts and keeps our clients ahead of their competition. Founded in 2016 by Stanford Business School alum Malte Kramer, Luxury Presence has grown to a global team ranked on the Inc. 5000 fastest-growing companies list three years in a row. We’re backed by world-class investors, including Bessemer Venture Partners, Toba Capital, and Switch Ventures, and have raised $52.6 million to date. More than 15,000 real estate businesses rely on our platform, including 31 of the RealTrends top 100 agents featured in The Wall Street Journal. Additionally, many of the industry’s most powerful brokerages — including Compass, Coldwell Banker, and Sotheby’s International Realty — rely on Luxury Presence as a trusted business partner. Every year since 2020, Luxury Presence has ranked on BuiltIn’s Best Place to Work lists. HousingWire named our founder and CEO a 2024 Tech Trendsetter, we’ve received several Tech100 Awards, and our lead nurturing tool just scored an Inman Innovation Award for Best AI-Powered Platform. Luxury Presence is an Equal Opportunity Employer.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, or national origin.

Working Hours: 3pm IST to 11pm IST
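The RAG- and RL-based ranking work described above can be illustrated with a toy sketch: scoring candidate listings against a query embedding by cosine similarity and returning the best matches. The vectors and listing ids below are made up for illustration; a production system would use a vector database such as LanceDB rather than a Python loop.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank_listings(query_vec, listing_vecs, top_k=2):
    # Score every listing embedding against the query and return the
    # top_k listing ids, highest similarity first.
    scored = [(lid, cosine(query_vec, v)) for lid, v in listing_vecs.items()]
    scored.sort(key=lambda t: t[1], reverse=True)
    return [lid for lid, _ in scored[:top_k]]
```

The same scoring function works whether the embeddings come from a fine-tuned model or an off-the-shelf encoder; only the vector source changes.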

Posted 3 days ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Job Title: Full Stack Developer
Location: Noida (Remote)
Experience: 4–10 years
Department: Global Technology
Reports to: Senior Manager

Role Overview
We are looking for a talented Full Stack Developer with deep expertise in backend and frontend development using modern JavaScript and Python stacks. You will architect, build, and deliver scalable web applications and APIs, collaborate closely with cross-functional teams, and drive engineering best practices in a cloud-native environment.

Key Responsibilities
Design, develop, test, and deploy robust backend services and APIs using Node.js, Express.js, TypeScript, and Python.
Build intuitive, responsive, and performant frontends using React.js and Next.js.
Implement and maintain data storage solutions (SQL/NoSQL) and integrate third-party services.
Ensure application security, performance, and scalability using best engineering practices.
Work with cloud platforms (AWS, Azure, GCP) for application deployment, monitoring, and scaling.
Collaborate with Product, Design, and QA to deliver seamless user experiences.
Write clean, maintainable, and well-documented code following industry standards.
Participate in code reviews, architecture discussions, and process improvements.
Troubleshoot, debug, and optimize application performance.
Stay current with emerging technologies, trends, and best practices in full stack and cloud development.

Required Skills & Experience
4–10 years of professional experience as a Full Stack Developer or similar role.
Strong proficiency in Node.js, Express.js, TypeScript, JavaScript (ES6+), and Python.
Hands-on experience with React.js and Next.js for frontend development.
Experience building RESTful APIs, microservices, and serverless architectures.
Good understanding of SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, DynamoDB) databases.
Solid knowledge of cloud platforms (AWS, Azure, or GCP) for deploying and managing web applications.
Familiarity with CI/CD, containerization (Docker), and infrastructure as code (Terraform, CloudFormation) is a plus.
Experience with version control systems (Git) and modern development workflows.
Strong problem-solving, debugging, and analytical skills.
Excellent communication and teamwork abilities.

Preferred Qualifications
Experience with GraphQL, WebSockets, or real-time applications.
Familiarity with DevOps practices and site reliability engineering.
Exposure to testing frameworks (Jest, Mocha, Cypress) and automation tools.
Previous work in Agile/Scrum teams.
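As a minimal illustration of the RESTful API work this role describes, here is a framework-agnostic sketch of method-plus-path dispatch, the core idea behind routers in Express.js and similar frameworks. The `Router` class and the `/health` route are invented for the example.

```python
class Router:
    # Map (HTTP method, path) pairs to handler callables and dispatch
    # incoming requests to them, returning (status, body) tuples.
    def __init__(self):
        self._routes = {}

    def add(self, method, path, handler):
        self._routes[(method.upper(), path)] = handler

    def dispatch(self, method, path):
        handler = self._routes.get((method.upper(), path))
        if handler is None:
            return 404, {"error": "not found"}
        return 200, handler()

router = Router()
router.add("GET", "/health", lambda: {"status": "ok"})
```

Real frameworks add path parameters, middleware, and request/response objects on top of this same lookup.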

Posted 3 days ago

Apply

9.0 - 12.0 years

2 Lacs

Thiruvananthapuram

On-site

9 - 12 Years | 1 Opening | Trivandrum

Role description
Role Proficiency: Design and implement Infrastructure/Cloud Architecture for small/mid-size projects

Outcomes:
Design and implement the architecture for the projects
Guide and review technical delivery by project teams
Provide technical expertise to other projects

Measures of Outcomes:
# of reusable components / processes developed
# of times components / processes reused
Contribution to technology capability development (e.g. trainings, webinars, blogs)
Customer feedback on overall technical quality (zero technology-related escalations)
Relevant technology certifications
Business development (# of proposals contributed to, # won)
# of white papers / document assets published / working prototypes

Outputs Expected:
Solution Definition and Design:
Define architecture for small/mid-sized projects
Design the technical framework and implement the same
Present the detailed design documents to relevant stakeholders and seek feedback
Undertake project-specific Proof of Concept activities to validate technical feasibility with guidance from the senior architect
Implement the best optimized solution and resolve performance issues

Requirement Gathering and Analysis:
Understand the functional and non-functional requirements
Collect non-functional requirements (such as response time, throughput numbers, user load, etc.)
through discussions with SMEs and business users
Identify technical aspects as part of story definition, especially at an architecture / component level

Project Management Support:
Share technical inputs with Project Managers / Scrum Masters
Help Scrum Masters / project managers understand the technical risks and come up with mitigation strategies
Help Engineers and Analysts overcome technical challenges

Technology Consulting:
Analysis of technology landscape, processes, and tools based on project objectives

Business and Technical Research:
Understand Infrastructure architecture and its criticality to analyze and assess tools (internal/external) on specific parameters
Understand Infrastructure architecture and its criticality to support the Architect/Sr. Architect in drafting recommendations based on findings of a Proof of Concept
Understand Infrastructure architecture and its criticality to analyze and identify new developments in existing technologies (e.g. methodologies, frameworks, accelerators, etc.)

Project Estimation:
Provide support for project estimations for business proposals and support sprint-level / component-level estimates
Articulate estimation methodology and module-level estimations for more standard projects, with focus on effort estimation alone

Proposal Development:
Contribute to proposal development of small to medium size projects from a technology/architecture perspective

Knowledge Management & Capability Development:
Conduct technical trainings / webinars to impart knowledge to CIS / project teams
Create collaterals (e.g. case studies, business value documents, summaries, etc.)
Gain industry-standard certifications on technology and architecture consulting
Contribute to the knowledge repository and tools
Create reference architecture models and reusable components from the project

Process Improvements / Delivery Excellence:
Identify avenues to improve project delivery parameters (e.g. productivity, efficiency, process, security, etc.) by leveraging tools, automation, etc.
Understand various technical tools used in the project (third party as well as home-grown) to improve efficiency and productivity

Skill Examples:
Use Domain/Industry Knowledge to understand business requirements and create POCs to meet business requirements under guidance
Use Technology Knowledge to analyse technology based on the client's specific requirements, analyse and understand existing implementations, work on simple technology implementations (POCs) under guidance, and guide the developers and enable them in the implementation of the same
Use knowledge of Architecture Concepts and Principles to provide inputs to the senior architects towards building component solutions, and deploy the solution as per the architecture under guidance
Use Tools and Principles to create low-level design under guidance from the senior Architect for the given business requirements
Use the Project Governance Framework to facilitate communication with the right stakeholders, and Project Metrics to help them understand their relevance in the project and to share input on project metrics with the relevant stakeholders for own area of work
Use Estimation and Resource Planning knowledge to help estimate and plan resources for specific modules / small projects with detailed requirements in place
Use Knowledge Management Tools and Techniques to participate in the knowledge management process (such as project-specific KT) and consume/contribute to the knowledge management repository
Use knowledge of Technical Standards, Documentation and Templates to understand and interpret the documents provided
Use Solution Structuring knowledge to understand the proposed solution and provide inputs to create draft proposals / RFPs (including effort estimation, scheduling, resource loading, etc.)
Knowledge Examples:
Domain/Industry Knowledge: Has basic knowledge of standard business processes within the relevant industry vertical and customer business domain
Technology Knowledge: Has deep working knowledge of one technology tower and is gaining more knowledge in Cloud and Security
Estimation and Resource Planning: Has working knowledge of estimation and resource planning techniques
Knowledge Management Tools and Techniques: Has basic knowledge of industry knowledge management tools (such as portals, wikis) and UST and customer knowledge management tools and techniques (such as workshops, classroom training, self-study, application walkthrough and reverse KT)
Technical Standards, Documentation and Templates: Has basic knowledge of various document templates and standards (such as business blueprints, design documents, etc.)
Requirement Gathering and Analysis: Demonstrates working knowledge of requirements gathering for functional and non-functional requirements, requirements analysis, analysis tools (such as functional flow diagrams, activity diagrams, blueprints, storyboards) and requirements management tools (e.g. MS Excel)

Additional Comments:
Role Overview
We’re seeking an AWS Certified Solutions Architect with strong Python skills and familiarity with .NET ecosystems to lead an application modernization effort. You will partner with cross-functional development teams to transform on-premises, monolithic .NET applications into a cloud-native, microservices-based architecture on AWS.
________________________________________
Key Responsibilities
• Architect & Design:
o Define the target state: microservices design, domain-driven boundaries, API contracts.
o Choose AWS services (EKS/ECS, Lambda, State Machines/Step Functions, API Gateway, EventBridge, RDS/DynamoDB, S3, etc.) to meet scalability, availability, and security requirements.
• Modernization Roadmap:
o Assess existing .NET applications and data stores; identify refactoring vs. re-platform opportunities.
o Develop a phased migration strategy
• Infrastructure as Code:
o Author and review CloudFormation templates.
o Establish CI/CD pipelines (CodePipeline, CodeBuild, GitHub Actions, Jenkins) for automated build, test, and deployment.
• Development Collaboration:
o Mentor and guide .NET and Python developers on containerization (Docker), orchestration (Kubernetes/EKS), and serverless patterns.
o Review code and design patterns to ensure best practices in resilience, observability, and security.
• Security & Compliance:
o Ensure alignment with IAM roles/policies, VPC networking, security groups, and KMS encryption strategies.
o Conduct threat modelling and partner with security teams to implement controls (WAF, GuardDuty, Shield).
• Performance & Cost Optimization:
o Implement autoscaling, right-sizing, and reserved instance strategies.
o Use CloudWatch, X-Ray, Elastic Stack and third-party tools to monitor performance and troubleshoot.
• Documentation & Knowledge Transfer:
o Produce high-level and detailed architecture diagrams, runbooks, and operational playbooks.
o Lead workshops and brown-bags to upskill teams on AWS services and cloud-native design.
o Drive day-to-day work to the 24x7 IOC Team.
________________________________________
Must-Have Skills & Experience
• AWS Expertise:
o AWS Certified Solutions Architect – Associate or Professional
o Deep hands-on with EC2, ECS/EKS, Lambda, API Gateway, RDS/Aurora, DynamoDB, S3, VPC, IAM
• Programming:
o Proficient in Python for automation, Lambdas, and microservices.
o Working knowledge of C#/.NET Core for understanding legacy applications and guiding refactoring.
• Microservices & Containers:
o Design patterns (circuit breaker, saga, sidecar).
o Containerization (Docker), orchestration on Kubernetes (EKS) or Fargate.
• Infrastructure as Code & CI/CD:
o CloudFormation, AWS CDK, or Terraform.
o Build/test/deploy pipelines (CodePipeline, CodeBuild, Jenkins, GitHub Actions).
• Networking & Security:
o VPC design, subnets, NAT, Transit Gateway.
o IAM best practices, KMS, WAF, Security Hub, GuardDuty.
• Soft Skills:
o Excellent verbal and written communication.
o Ability to translate complex technical concepts to business stakeholders.
o Proven leadership in agile, cross-functional teams.
________________________________________
Preferred / Nice-to-Have
• Experience with service mesh (AWS App Mesh, Istio).
• Experience with non-relational DBs (Neptune, etc.).
• Familiarity with event-driven architectures using EventBridge or SNS/SQS.
• Exposure to observability tools: CloudWatch Metrics/Logs, X-Ray, Prometheus/Grafana.
• Background in migrating SQL Server, Oracle, or other on-prem databases to AWS (DMS, SCT).
• Knowledge of serverless frameworks (Serverless Framework, SAM).
• Additional certifications: AWS Certified DevOps Engineer, Security Specialty.
________________________________________
Skills: Python, AWS Cloud, AWS Administration

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
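One of the design patterns this posting names, the circuit breaker, can be sketched in a few lines of Python. This is a simplified illustration only: real implementations (e.g. resilience libraries) also add a timed half-open state before closing again. The class name and threshold are invented for the example.

```python
class CircuitBreaker:
    # After `max_failures` consecutive failures the circuit opens and
    # further calls fail fast until reset() is called. A success while
    # closed resets the failure counter.
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.max_failures

    def call(self, fn, *args, **kwargs):
        if self.open:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0
        return result

    def reset(self):
        self.failures = 0
```

Wrapping a downstream dependency (a database, another microservice) in a breaker like this keeps a failing dependency from tying up every caller's threads.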

Posted 3 days ago

Apply

6.0 - 8.0 years

5 - 8 Lacs

Thiruvananthapuram

On-site

6 - 8 Years | 1 Opening | Trivandrum

Role description
Job Overview:
We are looking for a Backend Developer with 6–8 years of experience in backend development and cloud integration. This role focuses on the design and development of RESTful APIs, backend services, and seamless integration with AWS cloud infrastructure. The ideal candidate should be detail-oriented, capable of multitasking, and able to work effectively in a fast-paced, Agile environment.

Key Responsibilities:
Design, develop, and maintain high-performance RESTful APIs using TypeScript, Node.js, and Python.
Provide L3 support for complex production issues, including root cause analysis and resolution.
Optimize performance for both SQL and NoSQL database queries.
Integrate and manage various AWS services (Lambda, API Gateway, DynamoDB, SNS, SQS, S3, IAM).
Implement secure API access using OAuth, JWT, and related security protocols.
Collaborate with front-end teams for end-to-end application development.
Participate in code reviews, Agile ceremonies, and sprint planning.
Document incidents and resolutions, and provide technical guidance to peers.
Mandatory Skills:
Languages/Frameworks: TypeScript, Node.js, Python
Cloud & DevOps: Hands-on with AWS services (Lambda, API Gateway, DynamoDB, SQS, SNS, IAM, S3, CloudWatch); experience with serverless architecture
API Development: Strong experience in RESTful API design and development; knowledge of API security best practices (OAuth, JWT)
Databases: Proficiency with SQL (MySQL, PostgreSQL) and NoSQL (DynamoDB)
Version Control: Git and Git-based workflows (GitHub, GitLab)
Problem Solving & Support: Proven experience in L3 support, debugging, and issue resolution

Secondary Skills (Good to Have):
AWS Certification (Developer Associate / Solutions Architect)
Experience with Swagger/OpenAPI, AWS X-Ray
Knowledge of CI/CD pipelines
Understanding of OOP, MVC, and web standards
Familiarity with Agile / Scrum methodologies

Soft Skills:
Excellent verbal and written communication skills
Strong analytical and problem-solving abilities
Ability to work collaboratively in cross-functional teams

Skills: Node.js, RESTful APIs, AWS, Python

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
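The OAuth/JWT requirement above can be illustrated with a stdlib-only sketch of HS256 token signing and verification (compact JWS form: `header.payload.signature`, each part base64url-encoded). This is for illustration only; production services should use a vetted library such as PyJWT, which also validates registered claims like `exp` and `aud`.

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # base64url without '=' padding, per the JWT compact serialization.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> dict:
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    # Constant-time comparison to avoid timing side channels.
    if not hmac.compare_digest(_b64url(expected), sig):
        raise ValueError("bad signature")
    padded = body + "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

An API Gateway authorizer or Express/FastAPI middleware would call the equivalent of `verify_jwt` on every request before the handler runs.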

Posted 3 days ago

Apply

5.0 - 10.0 years

10 Lacs

Hyderābād

On-site

Position: Full Stack Developer
Contract: 12 months
Location: Hyderabad - HITEC City (Hybrid - 3 days in office)

What You’ll Do
• Develop and maintain robust, scalable web applications using Java for backend and React/TypeScript for frontend
• Design and implement APIs (RESTful and GraphQL) and microservices-based architectures
• Work with relational and non-relational databases such as PostgreSQL, MySQL, and MongoDB
• Leverage AWS services like EC2, S3, Lambda, API Gateway, RDS, and DynamoDB for cloud-based deployments
• Utilize CI/CD tools and Git workflows for efficient development and deployment
• Optimize applications for performance, scalability, and security
• Collaborate with DevOps teams to manage infrastructure using Terraform or AWS CloudFormation
• Participate in Agile/Scrum ceremonies and contribute to sprint planning, reviews, and retrospectives
• Work closely with product managers, designers, and QA to ensure high-quality feature delivery
• Troubleshoot and resolve production issues, ensuring minimal downtime and impact

What You Bring
• 5–10 years of strong experience in full stack software development
• Proficiency in frontend technologies: JavaScript, TypeScript, React
• Strong backend experience using Core Java and Spring Boot
• Solid understanding of SQL and NoSQL databases: PostgreSQL, MySQL, MongoDB
• Deep knowledge of AWS services: EC2, S3, Lambda, API Gateway, RDS, DynamoDB
• Experience building and consuming RESTful APIs and working with GraphQL
• Familiarity with microservices architecture and containerized environments (Docker, Kubernetes)
• Working knowledge of Git, CI/CD pipelines, and modern development workflows
• Understanding of cloud security, scalability, and performance optimization
• Hands-on experience with Infrastructure as Code tools like Terraform or AWS CloudFormation
• Excellent problem-solving skills, attention to detail, and a collaborative mindset
• Experience working in Agile/Scrum environments and across time zones.
Job Types: Full-time, Contractual / Temporary Contract length: 12 months Pay: Up to ₹1,000,000.00 per year Schedule: Day shift Work Location: In person
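A small sketch of one routine technique behind the "minimal downtime" responsibility above: retrying a transient failure with exponential backoff before escalating. The function and delay values are illustrative; the `sleep` parameter is injectable so a test (or a dry run) can skip the real waiting.

```python
import time

def retry(fn, attempts=3, base_delay=0.5, sleep=time.sleep):
    # Call fn(); on failure wait base_delay, then 2x, 4x, ... between
    # attempts. Re-raise the last exception when attempts run out.
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```

Production variants usually add jitter to the delay and retry only on retryable errors (timeouts, throttling) rather than on every exception.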

Posted 3 days ago

Apply

5.0 years

5 - 10 Lacs

Gurgaon

On-site

Manager - EXL/M/1435552
Services | Gurgaon
Posted On: 28 Jul 2025
End Date: 11 Sep 2025
Required Experience: 5 - 10 Years

Basic Section
Number Of Positions: 1
Band: C1
Band Name: Manager
Cost Code: D013514
Campus/Non Campus: NON CAMPUS
Employment Type: Permanent
Requisition Type: New
Max CTC: 1500000.0000 - 2500000.0000
Complexity Level: Not Applicable
Work Type: Hybrid – Working Partly From Home And Partly From Office

Organisational Group: Analytics
Sub Group: Analytics - UK & Europe
Organization: Services
LOB: Analytics - UK & Europe
SBU: Analytics
Country: India
City: Gurgaon
Center: EXL - Gurgaon Center 38

Skills: JAVA, HTML
Minimum Qualification: B.COM
Certification: No data available

Job Description: Senior Full Stack Developer
Position: Senior Full Stack Developer
Location: Gurugram
Relevant Experience Required: 8+ years
Employment Type: Full-time

About the Role
We are looking for a Senior Full Stack Developer who can build end-to-end web applications with strong expertise in both front-end and back-end development. The role involves working with Django, Node.js, React, and modern database systems (SQL, NoSQL, and Vector Databases), while leveraging real-time data streaming, AI-powered integrations, and cloud-native deployments. The ideal candidate is a hands-on technologist with a passion for modern UI/UX, scalability, and performance optimization.

Key Responsibilities
Front-End Development
Build responsive and user-friendly interfaces using HTML5, CSS3, JavaScript, and React.
Implement modern UI frameworks such as Next.js, Tailwind CSS, Bootstrap, or Material-UI.
Create interactive charts and dashboards with D3.js, Recharts, Highcharts, or Plotly.
Ensure cross-browser compatibility and optimize for performance and accessibility.
Collaborate with designers to translate wireframes and prototypes into functional components.

Back-End Development
Develop RESTful & GraphQL APIs with Django/DRF and Node.js/Express.
Design and implement microservices & event-driven architectures.
Optimize server performance and ensure secure API integrations.

Database & Data Management
Work with structured (PostgreSQL, MySQL) and unstructured databases (MongoDB, Cassandra, DynamoDB).
Integrate and manage Vector Databases (Pinecone, Milvus, Weaviate, Chroma) for AI-powered search and recommendations.
Implement sharding, clustering, caching, and replication strategies for scalability.
Manage both transactional and analytical workloads efficiently.

Real-Time Processing & Visualization
Implement real-time data streaming with Apache Kafka, Pulsar, or Redis Streams.
Build live features (e.g., notifications, chat, analytics) using WebSockets & Server-Sent Events (SSE).
Visualize large-scale data in real time for dashboards and BI applications.

DevOps & Deployment
Deploy applications on cloud platforms (AWS, Azure, GCP).
Use Docker, Kubernetes, Helm, and Terraform for scalable deployments.
Maintain CI/CD pipelines with GitHub Actions, Jenkins, or GitLab CI.
Monitor, log, and ensure high availability with Prometheus, Grafana, ELK/EFK stack.

Good to Have: AI & Advanced Capabilities
Integrate state-of-the-art AI/ML models for personalization, recommendations, and semantic search.
Implement Retrieval-Augmented Generation (RAG) pipelines with embeddings.
Work on multimodal data processing (text, image, and video).
Preferred Skills & Qualifications
Core Stack
Front-End: HTML5, CSS3, JavaScript, TypeScript, React, Next.js, Tailwind CSS/Bootstrap/Material-UI
Back-End: Python (Django/DRF), Node.js/Express
Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB, Vector Databases (Pinecone, Milvus, Weaviate, Chroma)
APIs: REST, GraphQL, gRPC

State-of-the-Art & Advanced Tools
Streaming: Apache Kafka, Apache Pulsar, Redis Streams
Visualization: D3.js, Highcharts, Plotly, Deck.gl
Deployment: Docker, Kubernetes, Helm, Terraform, ArgoCD
Cloud: AWS Lambda, Azure Functions, Google Cloud Run
Monitoring: Prometheus, Grafana, OpenTelemetry

Workflow Type: Back Office
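The real-time processing responsibilities above mostly reduce to windowed aggregation over an event stream. Here is a toy, source-agnostic sketch of tumbling-window counts; the event tuples are invented, and a real job would consume from Kafka, Pulsar, or Redis Streams instead of an in-memory list.

```python
from collections import defaultdict

def tumbling_counts(events, window_s=60):
    # Group (timestamp_seconds, key) events into fixed non-overlapping
    # windows and count occurrences per (window_start, key).
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_s)
        counts[(window_start, key)] += 1
    return dict(counts)
```

A streaming engine adds the hard parts on top of this core (late/out-of-order events, checkpointing, emitting windows as they close), but the per-window bucketing logic is the same.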

Posted 3 days ago

Apply

0 years

0 Lacs

Noida

On-site

Role Title: AI Platform Engineer
Location: Bangalore (in person in office when required)
Part of the GenAI COE Team

Key Responsibilities
Platform Development and Evangelism:
Build scalable AI platforms that are customer-facing.
Evangelize the platform with customers and internal stakeholders.
Ensure platform scalability, reliability, and performance to meet business needs.

Machine Learning Pipeline Design:
Design ML pipelines for experiment management, model management, feature management, and model retraining.
Implement A/B testing of models.
Design APIs for model inferencing at scale.
Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI.

LLM Serving and GPU Architecture:
Serve as an SME in LLM serving paradigms.
Possess deep knowledge of GPU architectures.
Expertise in distributed training and serving of large language models.
Proficient in model and data parallel training using frameworks like DeepSpeed and serving frameworks like vLLM.

Model Fine-Tuning and Optimization:
Demonstrate proven expertise in model fine-tuning and optimization techniques.
Achieve better latencies and accuracies in model results.
Reduce training and resource requirements for fine-tuning LLM and LVM models.

LLM Models and Use Cases:
Have extensive knowledge of different LLM models.
Provide insights on the applicability of each model based on use cases.
Proven experience in delivering end-to-end solutions from engineering to production for specific customer use cases.

DevOps and LLMOps Proficiency:
Proven expertise in DevOps and LLMOps practices.
Knowledgeable in Kubernetes, Docker, and container orchestration.
Deep understanding of LLM orchestration frameworks like Flowise, LangFlow, and LangGraph.
Skill Matrix
LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama
LLM Ops: MLflow, LangChain, LangGraph, LangFlow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI
Databases/Data warehouse: DynamoDB, Cosmos, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery
Cloud Knowledge: AWS/Azure/GCP
DevOps (Knowledge): Kubernetes, Docker, FluentD, Kibana, Grafana, Prometheus
Cloud Certifications (Bonus): AWS Professional Solution Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert
Proficient in Python, SQL, JavaScript

Job Type: Full-time
Work Location: In person
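The A/B testing of models mentioned above typically starts with deterministic user bucketing, so the same user always sees the same model variant without any assignment table. A minimal sketch, assuming hash-based assignment (the experiment and variant names are invented):

```python
import hashlib

def assign_variant(user_id, experiment, variants=("control", "treatment")):
    # Hash experiment:user_id into a stable bucket index. Salting with
    # the experiment name keeps assignments independent across tests.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Because assignment is a pure function of the inputs, both the serving path and the offline analysis pipeline can recompute it and agree on which model each user saw.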

Posted 3 days ago

Apply

2.0 years

6 - 10 Lacs

India

On-site

Overview
We are looking for an experienced Senior Backend Developer (Node.js) who is passionate about building scalable and efficient backend systems. If you're someone who enjoys solving complex technical problems, writing clean code, and working in agile teams — we'd love to connect with you!

Duties
Design and develop scalable backend services using Node.js and frameworks like Express, Hapi, or Fastify
Build and maintain RESTful APIs for frontend and third-party integrations
Work with both SQL (MySQL, PostgreSQL) and NoSQL (MongoDB, DynamoDB) databases
Optimize backend performance and write clean, testable code
Collaborate with cross-functional teams including frontend, DevOps, and QA
Participate in agile/scrum meetings and development sprints
Stay up-to-date with best practices and backend trends

Skills
Minimum 2 years of hands-on Node.js experience
Proficiency in backend frameworks (Express, Hapi, Fastify, etc.)
Good understanding of REST API design
Experience with database management (SQL & NoSQL)
Knowledge of unit and integration testing
Familiarity with agile methodologies

Nice to Have
Experience with AWS services (Lambda, EC2, S3, RDS)
Exposure to CI/CD pipelines and DevOps tools

We welcome applicants who are passionate about technology and eager to contribute to innovative projects. If you possess the required skills and are looking for an opportunity to grow within a collaborative environment, we encourage you to apply.

Job Type: Full-time
Pay: ₹600,000.00 - ₹1,000,000.00 per year
Work Location: In person
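Good REST API design, listed above, usually includes pagination. Here is a minimal sketch of opaque cursor-based pagination using only the standard library; the cursor format and helper names are invented for illustration, and real APIs often key the cursor on a sort value rather than an offset.

```python
import base64
import json

def encode_cursor(offset):
    # Opaque cursor: clients can pass it back but shouldn't parse it.
    return base64.urlsafe_b64encode(json.dumps({"o": offset}).encode()).decode()

def decode_cursor(cursor):
    return json.loads(base64.urlsafe_b64decode(cursor))["o"]

def paginate(items, cursor=None, limit=2):
    # Return one page plus the cursor for the next page, or None when
    # the collection is exhausted.
    offset = decode_cursor(cursor) if cursor else 0
    page = items[offset:offset + limit]
    nxt = encode_cursor(offset + limit) if offset + limit < len(items) else None
    return page, nxt
```

Keeping the cursor opaque lets the server change its paging strategy later without breaking clients that stored old cursors in the meantime.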

Posted 3 days ago

Apply

1.0 - 3.0 years

5 - 8 Lacs

Noida

On-site

Responsibilities
Develop and build extremely reliable, scalable and high-performing web applications for our clients
Review and understand business requirements, ensuring that development tasks are completed within the timeline provided and that issues are fully tested with minimal defects
Collaborate across the company and interact with our customers to define, design and showcase new concepts and solutions
Collaborate with other developers to ensure that client needs are met at all times
Work in a rapid and agile development process to enable increased speed to market against a backdrop of appropriate controls
Implement good development and testing standards to ensure quality of deliverables

Requirements
B.Tech/MCA with at least 1-3 years of relevant experience
Exposure to MVC frameworks like Spring and ORM tools like Hibernate
Excellent understanding of OOP concepts, microservices and the Java programming language
Programming experience in relational platforms like MySQL, Oracle; non-relational (NoSQL) platforms like DynamoDB/MongoDB would be an add-on
Knowledge of JavaScript, jQuery, HTML, XML would be an added advantage
Sound analytical skills and good communication skills
Experience with an agile development methodology, preferably Scrum

Good to have
Experience in cloud computing or Linux
Previously involved in a client handling role
Proactive self-starter and results oriented
Flexible and adaptable with good interpersonal skills

Posted 3 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Urgent Hiring: Back-End Developer
Location: Hyderabad
Notice period: Immediate joiner.

Job Summary:
We are seeking a highly motivated Back-End Developer with a strong foundation in building scalable and robust web applications. The ideal candidate will have extensive experience in back-end development, proficiency in Python, and hands-on experience with cloud services and databases. This role involves designing, developing, and maintaining high-performance back-end systems that power our applications.

Key Responsibilities:
Design, develop, and maintain scalable back-end systems
Build and maintain RESTful APIs that support front-end applications
Collaborate with cross-functional teams to define, design, and ship new features
Write clean, efficient, and well-documented code
Participate in code reviews and ensure high code quality
Troubleshoot, debug, and optimize application performance
Stay updated with emerging technologies and industry trends

Required Skills & Qualifications:
5+ years of experience in back-end development
Proficiency in Python and experience with frameworks such as FastAPI, Django, or Flask
Experience with both SQL (PostgreSQL) and NoSQL (DynamoDB, MongoDB) databases
Experience designing and building RESTful APIs
Understanding of fundamental design principles behind a scalable application
Solid experience with AWS services (e.g., EC2, S3, RDS, Lambda, API Gateway)

Apply Here: anju.n@valuelabs.com

Posted 3 days ago

Apply

5.0 years

0 Lacs

India

Remote

Who We Are
At Twilio, we’re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences. Our dedication to remote-first work, and strong culture of connection and global inclusion means that no matter your location, you’re part of a vibrant team with diverse experiences making a global impact each day. As we continue to revolutionize how the world interacts, we’re acquiring new skills and experiences that make work feel truly rewarding. Your career at Twilio is in your hands.

See yourself at Twilio
Join the team as our next Senior Machine Learning Engineer (L3) on our Comms Platform Engineering team.

About The Job
This position is needed to scope, design, and deploy machine learning systems into the real world; the individual will closely partner with Product & Engineering teams to execute the roadmap for Twilio’s AI/ML products and services. Twilio is looking for a Senior Machine Learning Engineer to join the rapidly growing Comms Platform Engineering team of our Messaging business unit. You will understand the needs of our customers and build data products that solve their needs at a global scale. Working side by side with other engineering teams and product counterparts, you will own end-to-end execution of ML solutions. To thrive in this role, you must have a background in ML engineering and a track record of solving data & machine-learning problems at scale.
You are a self-starter, embody a growth attitude, and collaborate effectively across the entire Twilio organization.

Responsibilities
In this role, you’ll:
Build and maintain scalable machine learning solutions in production
Train and validate both deep learning-based and statistical-based models considering use-case, complexity, performance, and robustness
Demonstrate end-to-end understanding of applications and develop a deep understanding of the “why” behind our models & systems
Partner with product managers, tech leads, and stakeholders to analyze business problems, clarify requirements and define the scope of the systems needed
Work closely with data platform teams to build robust, scalable batch and real-time data pipelines
Work closely with software engineers, build tools to enhance productivity and to ship and maintain ML models
Drive engineering best practices around code reviews, automated testing and monitoring

Qualifications
Not all applicants will have skills that match a job description exactly. Twilio values diverse experiences in other industries, and we encourage everyone who meets the required qualifications to apply. While having “desired” qualifications makes for a strong candidate, we encourage applicants with alternative experiences to also apply. If your career is just starting or hasn't followed a traditional path, don't let that stop you from considering Twilio. We are always looking for people who will bring something new to the table!

Required
5+ years of applied ML experience
Proficiency in Python is preferred. We will also consider strong quantitative candidates with a background in other programming languages
Strong background in the foundations of machine learning and building blocks of modern deep learning
Track record of building, shipping and maintaining machine learning models in production in an ambiguous and fast-paced environment
A clear understanding of frameworks like PyTorch, TensorFlow, or Keras, including why and how these frameworks do what they do
Familiarity with MLOps concepts for maintaining models in production, such as automated testing, retraining, and monitoring
Demonstrated ability to ramp up, understand, and operate effectively in new application/business domains
Exposure to modern data storage, messaging, and processing tools (Kafka, Apache Spark, Hadoop, Presto, DynamoDB, etc.)
Experience working in an agile team environment with changing priorities
Experience working on AWS

Desired

Experience with Large Language Models

Location

This role will be remote and based in India (only in Karnataka, Tamil Nadu, Maharashtra, Telangana, and New Delhi).

Travel

We prioritize connection and opportunities to build relationships with our customers and each other. For this role, you may be required to travel occasionally to participate in project or team in-person meetings.

What We Offer

Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.

Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That’s why we seek out colleagues who embody our values — something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts. So, if you’re ready to unleash your full potential, do your best work, and be the best version of yourself, apply now! If this role isn’t what you’re looking for, please consider other open positions. Twilio is proud to be an equal opportunity employer.
We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.

Posted 3 days ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description

Job Overview: We are looking for a Backend Developer with 3–5 years of experience in backend development and cloud integration. This role focuses on the design and development of RESTful APIs, backend services, and seamless integration with AWS cloud infrastructure. The ideal candidate should be detail-oriented, capable of multitasking, and able to work effectively in a fast-paced, Agile environment.

Key Responsibilities

Design, develop, and maintain high-performance RESTful APIs using TypeScript, Node.js, and Python.
Provide L3 support for complex production issues, including root cause analysis and resolution.
Optimize performance for both SQL and NoSQL database queries.
Integrate and manage various AWS services (Lambda, API Gateway, DynamoDB, SNS, SQS, S3, IAM).
Implement secure API access using OAuth, JWT, and related security protocols.
Collaborate with front-end teams for end-to-end application development.
Participate in code reviews, Agile ceremonies, and sprint planning.
Document incidents and resolutions, and provide technical guidance to peers.
Mandatory Skills

Languages/Frameworks: TypeScript, Node.js, Python
Cloud & DevOps: Hands-on with AWS services (Lambda, API Gateway, DynamoDB, SQS, SNS, IAM, S3, CloudWatch); experience with serverless architecture
API Development: Strong experience in RESTful API design and development; knowledge of API security best practices (OAuth, JWT)
Databases: Proficiency with SQL (MySQL, PostgreSQL) and NoSQL (DynamoDB)
Version Control: Git and Git-based workflows (GitHub, GitLab)
Problem Solving & Support: Proven experience in L3 support, debugging, and issue resolution

Secondary Skills (Good To Have)

AWS Certification (Developer Associate / Solutions Architect)
Experience with Swagger/OpenAPI, AWS X-Ray
Knowledge of CI/CD pipelines
Understanding of OOP, MVC, and web standards
Familiarity with Agile / Scrum methodologies

Soft Skills

Excellent verbal and written communication skills
Strong analytical and problem-solving abilities
Ability to work collaboratively in cross-functional teams

Skills: Node.js, TypeScript, REST API, AWS
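The "secure API access using OAuth, JWT" responsibility above can be illustrated with a minimal, stdlib-only sketch of HS256 JWT signing and verification. This is an assumption-laden teaching example, not a production recipe: the secret and claim names are hypothetical, and a real service would use a vetted library such as PyJWT and validate additional claims (issuer, audience).

```python
import base64
import hashlib
import hmac
import json
import time


def b64url_decode(segment: str) -> bytes:
    # JWT segments are base64url-encoded without padding; restore it first.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))


def b64url_encode(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()


def sign_hs256(payload: dict, secret: bytes) -> str:
    # Build a compact JWT: header.payload.signature, all base64url-encoded.
    header = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url_encode(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url_encode(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"


def verify_hs256(token: str, secret: bytes):
    # Return the claims if the signature is valid and the token is unexpired,
    # otherwise None.
    try:
        header, body, sig = token.split(".")
    except ValueError:
        return None
    signing_input = f"{header}.{body}".encode()
    expected = b64url_encode(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):  # constant-time comparison
        return None
    claims = json.loads(b64url_decode(body))
    if claims.get("exp", 0) < time.time():
        return None
    return claims
```

In a Lambda authorizer, `verify_hs256` would run against the `Authorization` header before the request reaches business logic; a tampered token or wrong secret fails the constant-time signature check and is rejected.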

Posted 3 days ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description

Job Overview: We are looking for a Backend Developer with 6–8 years of experience in backend development and cloud integration. This role focuses on the design and development of RESTful APIs, backend services, and seamless integration with AWS cloud infrastructure. The ideal candidate should be detail-oriented, capable of multitasking, and able to work effectively in a fast-paced, Agile environment.

Key Responsibilities

Design, develop, and maintain high-performance RESTful APIs using TypeScript, Node.js, and Python.
Provide L3 support for complex production issues, including root cause analysis and resolution.
Optimize performance for both SQL and NoSQL database queries.
Integrate and manage various AWS services (Lambda, API Gateway, DynamoDB, SNS, SQS, S3, IAM).
Implement secure API access using OAuth, JWT, and related security protocols.
Collaborate with front-end teams for end-to-end application development.
Participate in code reviews, Agile ceremonies, and sprint planning.
Document incidents and resolutions, and provide technical guidance to peers.
Mandatory Skills

Languages/Frameworks: TypeScript, Node.js, Python
Cloud & DevOps: Hands-on with AWS services (Lambda, API Gateway, DynamoDB, SQS, SNS, IAM, S3, CloudWatch); experience with serverless architecture
API Development: Strong experience in RESTful API design and development; knowledge of API security best practices (OAuth, JWT)
Databases: Proficiency with SQL (MySQL, PostgreSQL) and NoSQL (DynamoDB)
Version Control: Git and Git-based workflows (GitHub, GitLab)
Problem Solving & Support: Proven experience in L3 support, debugging, and issue resolution

Secondary Skills (Good To Have)

AWS Certification (Developer Associate / Solutions Architect)
Experience with Swagger/OpenAPI, AWS X-Ray
Knowledge of CI/CD pipelines
Understanding of OOP, MVC, and web standards
Familiarity with Agile / Scrum methodologies

Soft Skills

Excellent verbal and written communication skills
Strong analytical and problem-solving abilities
Ability to work collaboratively in cross-functional teams

Skills: Node.js, RESTful APIs, AWS, Python

Posted 3 days ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

JD - Data Engineer

Pattern values data and the engineering required to take full advantage of it. As a Data Engineer at Pattern, you will work on business problems that have a huge impact on how the company maintains its competitive edge.

Essential Duties And Responsibilities

Develop, deploy, and support real-time, automated, scalable data streams from a variety of sources into the data lake or data warehouse.
Develop and implement data auditing strategies and processes to ensure data quality; identify and resolve problems associated with large-scale data processing workflows; implement technical solutions to maintain data pipeline processes and troubleshoot failures.
Collaborate with technology teams and partners to specify data requirements and provide access to data.
Tune application and query performance using profiling tools and SQL or other relevant query languages.
Understand business, operations, and analytics requirements for data.
Build data expertise and own data quality for assigned areas of ownership.
Work with data infrastructure to triage issues and drive them to resolution.

Required Qualifications

Bachelor’s Degree in Data Science, Data Analytics, Information Management, Computer Science, Information Technology, a related field, or equivalent professional experience
More than 4 years of overall experience
3+ years of experience working with SQL
3+ years of experience implementing modern data architecture-based data warehouses
2+ years of experience working with data warehouses such as Redshift, BigQuery, or Snowflake, with an understanding of data architecture design
Excellent software engineering and scripting knowledge
Strong communication skills (both in presentation and comprehension) along with an aptitude for thought leadership in data management and analytics
Expertise with data systems working with massive data sets from various data sources
Ability to lead a team of Data Engineers

Preferred Qualifications

Experience working with time series databases
Advanced knowledge of SQL, including the ability to write stored procedures, triggers, and analytic/windowing functions, and to tune queries
Advanced knowledge of Snowflake, including the ability to write and orchestrate streams and tasks
Background in Big Data, non-relational databases, Machine Learning, and Data Mining
Experience with cloud-based technologies including SNS, SQS, SES, S3, Lambda, and Glue
Experience with modern data platforms like Redshift, Cassandra, DynamoDB, Apache Airflow, Spark, or ElasticSearch
Expertise in Data Quality and Data Governance

Our Core Values

Data Fanatics: Our edge is always found in the data
Partner Obsessed: We are obsessed with partner success
Team of Doers: We have a bias for action
Game Changers: We encourage innovation

About Pattern

Pattern is the premier partner for global e-commerce acceleration and is headquartered in Utah's Silicon Slopes tech hub, with offices in Asia, Australia, Europe, the Middle East, and North America. Valued at $2 billion, Pattern has been named one of the fastest-growing tech companies in North America by Deloitte and one of the best-led companies in America by Inc. More than 100 global brands, like Nestle, Sylvania, Kong, Panasonic, and Sorel, rely on Pattern's global e-commerce acceleration platform to scale their business around the world. We place employee experience at the center of our business model and have been recognized as one of America's Most Loved Workplaces®. https://pattern.com/

Posted 3 days ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

Remote

We're Hiring: Capacity Planning Engineer (8+ Yrs Experience)

📍 Location: [Insert Location or Remote]
🧠 Experience: 8+ Years
🕒 Full-Time | Immediate Joiners Preferred

Are you passionate about optimizing system performance and designing scalable cloud infrastructure? We’re looking for a Capacity Planning Engineer to join our team!

Key Responsibilities:

Develop and implement capacity planning strategies for AWS environments, especially EKS.
Conduct continuous load testing to identify bottlenecks and improve performance.
Collaborate with DevOps and engineering teams to align infrastructure with business needs.
Analyze system usage data, forecast future capacity, and recommend solutions.
Monitor and optimize cloud resources for performance and cost-efficiency.
Create detailed reports on capacity trends and infrastructure health.

Essential Skills:

Deep expertise in the AWS ecosystem: EKS, EC2, DynamoDB, Lambda, S3, RDS.
Experience with Dynatrace (or similar monitoring tools).
Strong grasp of container orchestration, microservices, and automated load testing tools (e.g., JMeter, Gatling).
Proficiency in Python or Bash for scripting and automation.
Excellent analytical and communication skills.
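The "forecast future capacity" responsibility above can be sketched as a linear-trend projection over recent usage samples: fit a line to daily peak utilization and project when it crosses a capacity threshold. The usage numbers and the 80% threshold below are hypothetical; a real capacity model would also account for seasonality, burst patterns, and headroom policy.

```python
# Hypothetical daily peak CPU utilization samples (percent), one per day.
usage = [52.0, 54.5, 55.1, 57.8, 59.0, 61.2, 62.5]


def linear_fit(ys):
    # Ordinary least squares for y = slope * x + intercept, with x = 0, 1, 2, ...
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    return slope, mean_y - slope * mean_x


def days_until(threshold, ys):
    # Project the fitted trend forward to the day the threshold is crossed.
    slope, intercept = linear_fit(ys)
    if slope <= 0:
        return None  # usage is flat or shrinking; no breach projected
    return max(0.0, (threshold - intercept) / slope - (len(ys) - 1))


slope, _ = linear_fit(usage)
print(f"growth ≈ {slope:.2f} %/day; ~{days_until(80.0, usage):.0f} days to 80% capacity")
```

With the sample data the fitted growth is roughly 1.7 percentage points per day, projecting a breach of the 80% threshold in about ten days; that lead time is what drives the "recommend solutions" step (scale out, right-size, or shed load).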

Posted 3 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
