1.0 - 5.0 years
0 Lacs
pune, maharashtra
On-site
As a member of Upcycle Reput Tech Pvt Ltd, a blockchain-based startup committed to sustainability and environmental impact reduction, your primary responsibility will be to assist in deploying, managing, and monitoring cloud infrastructure on AWS. You will support various AWS services such as EC2, S3, RDS, Lambda, IAM, VPC, and CloudWatch, ensuring seamless operations. Your role will involve implementing basic CI/CD pipelines using tools like AWS CodePipeline and CodeBuild to streamline development processes. Furthermore, you will play a crucial role in automating infrastructure using Terraform or CloudFormation and ensuring adherence to security best practices in AWS configurations, including IAM policies and VPC security groups. Familiarity with Docker and Kubernetes for containerized deployments is preferred, along with experience supporting log analysis and monitoring using tools like CloudWatch, ELK, or Prometheus. Your ability to document system configurations and troubleshooting procedures will be essential in maintaining operational efficiency.

To excel in this role, you should possess 1-2 years of hands-on experience with AWS cloud services, a solid understanding of AWS EC2, S3, RDS, Lambda, IAM, VPC, and CloudWatch, as well as basic knowledge of Linux administration and scripting using Bash or Python. Experience with Infrastructure as Code (IaC), CI/CD pipelines, and Git version control is required, along with a grasp of networking concepts and security protocols. Preferred qualifications include AWS certifications such as AWS Certified Cloud Practitioner or AWS Certified Solutions Architect - Associate, and familiarity with serverless computing and monitoring tools like Prometheus, Grafana, or the ELK Stack. Strong troubleshooting skills, effective communication, and the ability to work collaboratively are key attributes for success in this role.

In return, Upcycle Reput Tech Pvt Ltd offers a competitive salary and benefits package, the opportunity to engage with exciting and challenging projects, a collaborative work environment, and professional development opportunities. Join us in our mission to integrate sustainable practices into the supply chain and drive environmental impact reduction across industries.
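To give a concrete flavour of the CloudWatch monitoring work this posting describes, here is a minimal, illustrative boto3 sketch (not taken from the posting) that creates a CPU alarm for an EC2 instance; the instance ID, region, and SNS topic ARN are placeholder assumptions.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Placeholder identifiers -- real values would come from your environment.
INSTANCE_ID = "i-0123456789abcdef0"
ALERT_TOPIC_ARN = "arn:aws:sns:ap-south-1:123456789012:ops-alerts"

# Alarm when average CPU stays above 80% for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName=f"high-cpu-{INSTANCE_ID}",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[ALERT_TOPIC_ARN],
)
```

In practice the same alarm would usually be captured in Terraform or CloudFormation rather than created ad hoc, which is the IaC angle the posting also calls for.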
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
punjab
On-site
As a Data Engineer Architect, you will play a critical role in leading our data engineering efforts by designing and implementing scalable data solutions that align with our business objectives. Your expertise in Snowflake, data warehousing, and modern data engineering tools will be key to the success of our data projects.

Your responsibilities will include designing robust data architecture solutions, developing and maintaining data pipelines using tools like DBT and Apache Airflow, collaborating with cross-functional teams to gather requirements, optimizing data processes for improved performance, and managing RDBMS databases with a focus on AWS RDS. Additionally, you will utilize Alteryx for data preparation and blending, develop best practices for data governance, security, and quality assurance, and mentor junior data engineers and analysts to foster a culture of continuous learning and improvement.

To excel in this role, you should hold a Bachelor's degree in Computer Science or a related field (a Master's degree is preferred) and have at least 7 years of experience in data engineering with a focus on data architecture and design. Strong expertise in Snowflake, proficiency in building and maintaining data pipelines using DBT and Apache Airflow, solid experience with RDBMS and SQL optimization, familiarity with Alteryx, and knowledge of cloud technologies, especially AWS, are essential. Preferred skills include experience with data visualization tools (e.g., Tableau, Power BI), an understanding of data warehousing concepts and best practices, and familiarity with machine learning concepts and tools.

Overall, your problem-solving skills, ability to work under pressure in a fast-paced environment, and strong communication skills will be critical in collaborating effectively with technical and non-technical stakeholders to drive the success of our data engineering initiatives. Stay current with industry trends and emerging technologies to continuously enhance your data engineering expertise.
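As a rough illustration of the DBT-plus-Airflow orchestration this role centres on, the sketch below shows an Airflow 2.x-style DAG that lands raw data and then rebuilds dbt models; the DAG id, commands, paths, and schedule are assumptions for the example, not details from the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Minimal daily pipeline: land raw data, then rebuild dbt models on the warehouse.
with DAG(
    dag_id="daily_snowflake_refresh",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_raw_data",
        bash_command="python extract_to_stage.py",   # placeholder extract script
    )
    transform = BashOperator(
        task_id="dbt_build",
        bash_command="dbt build --profiles-dir /opt/dbt",
    )

    extract >> transform
```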
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
pune, maharashtra
On-site
YASH Technologies is a leading technology integrator that specializes in assisting clients in reimagining operating models, enhancing competitiveness, optimizing costs, fostering exceptional stakeholder experiences, and driving business transformation. At YASH, we are a group of talented individuals working with cutting-edge technologies. Our mission is centered around making real positive changes in an increasingly virtual world, transcending generational gaps and future disruptions.

We are currently seeking React professionals with the following qualifications:
- Experience required: 7 to 9 years
- Developing new user-facing features using React.js
- Strong proficiency in JavaScript, including DOM manipulation and the JavaScript object model
- Thorough understanding of React.js and its core principles
- Experience with popular React.js workflows (such as Redux)
- Familiarity with RESTful APIs
- Familiarity with modern front-end build pipelines and tools
- Familiarity with code versioning tools such as Git
- Knowledge of AWS services like S3, Lambda, EC2, ECS

At YASH, you will have the opportunity to shape a career that aligns with your aspirations while collaborating in an inclusive team environment. We embrace career-oriented skilling models and harness our collective intelligence with technology to facilitate continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is guided by four core principles:
- Flexible work arrangements, free spirit, and emotional positivity
- Agile self-determination, trust, transparency, and open collaboration
- Comprehensive support for achieving business goals
- Stable employment with a positive atmosphere and ethical corporate culture
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
As a member of our team, you will be responsible for the following key tasks:
- Apply a strong understanding of Dynatrace and AWS to manage the production environment, continuously monitoring availability and maintaining a comprehensive view of system health.
- Demonstrate familiarity with SLA, SLO, and SLI key metrics to ensure adherence to performance standards.
- Collect and analyze metrics from applications to support performance optimization and troubleshooting efforts.
- Bring proficiency in AWS services including API Gateway, Lambda, Kibana, CloudWatch, DynamoDB, and S3.
- Implement automation practices to develop sustainable systems and services, while also focusing on enhancing reliability without compromising feature development speed.
- Maintain a balance between feature development speed and reliability by defining clear service-level objectives.
- Proficiency in tools such as ServiceNow, Dynatrace, and Cloud Support is desirable.

About Virtusa: Virtusa is dedicated to fostering teamwork, improving quality of life, and supporting professional and personal development. Join our global team of 27,000 professionals who are committed to your growth, offering exciting projects, opportunities, and exposure to cutting-edge technologies throughout your career. At Virtusa, we bring together great minds and great potential to create a collaborative environment that encourages innovative thinking and excellence. We believe in nurturing new ideas and providing a dynamic platform for individuals to excel.
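As a hedged illustration of the SLI/SLO monitoring described above, the following boto3 sketch computes a one-hour availability SLI from CloudWatch Lambda metrics; the function name and one-hour window are assumptions made for the example.

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")
FUNCTION_NAME = "orders-api"        # hypothetical Lambda behind API Gateway
WINDOW = timedelta(hours=1)


def metric_sum(metric_name: str) -> float:
    """Sum a built-in AWS/Lambda metric over the last hour in 5-minute buckets."""
    end = datetime.utcnow()
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/Lambda",
        MetricName=metric_name,
        Dimensions=[{"Name": "FunctionName", "Value": FUNCTION_NAME}],
        StartTime=end - WINDOW,
        EndTime=end,
        Period=300,
        Statistics=["Sum"],
    )
    return sum(point["Sum"] for point in resp["Datapoints"])


invocations = metric_sum("Invocations")
errors = metric_sum("Errors")
availability = 1.0 if invocations == 0 else 1 - errors / invocations
print(f"1h availability SLI: {availability:.4%}")   # compared against the agreed SLO
```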
Posted 1 week ago
5.0 - 10.0 years
9 - 19 Lacs
hyderabad, chennai, bengaluru
Hybrid
Role & responsibilities

Mandatory skillsets: React Native and React JS, JavaScript and TypeScript, AWS (Lambda, S3, SMS, SQL)

Experience bands:
- SA: 5-9 years (4 years of relevant experience is mandatory)
- M: 8-12 years (6 years of relevant experience is mandatory)

Location preference: Chennai/Hyderabad

- React JS: hybrid mobile and web application development experience
- React Native: native mobile application development experience
- Real-time AWS work experience is mandatory
- Good communication skills

Preferred candidate profile: notice period (NP) - immediate to 15 days

FYI: apply only if you are interested in contract openings. Reach me at divya.bhat@orcapod.work
Posted 1 week ago
3.0 - 8.0 years
6 - 14 Lacs
gurugram
Work from Office
My LinkedIn: linkedin.com/in/yashsharma1608

- Contract period: 6-12 months
- Payroll: ASV Consulting (my company); the client will be disclosed after round 1
- Job location: Gurgaon - onsite (WFO)
- Budget: up to 1 lakh/month, depending on last drawn (relevant hike)
- Experience: 3+ years

About the Role
We are looking for a skilled AWS Data Engineer with strong hands-on experience in AWS Glue and AWS analytics services. The candidate will be responsible for designing, building, and optimizing scalable data pipelines and ETL processes that support advanced analytics and business intelligence requirements.

Key Responsibilities
- Design and develop ETL pipelines using AWS Glue, PySpark, and AWS services (Lambda, Step Functions, S3, etc.).
- Build and maintain data lakes and warehouses using AWS S3, Athena, Redshift, and the Glue Data Catalog.
- Automate data ingestion and transformation from structured and unstructured sources.
- Monitor, troubleshoot, and optimize pipeline performance and cost efficiency.
- Collaborate with analysts, data scientists, and business stakeholders to define requirements.
- Ensure data governance, quality, and security standards.
- Document data workflows, schemas, and technical solutions.

Required Qualifications
- 3+ years of experience in data engineering, preferably with cloud platforms.
- Strong experience in AWS Glue (ETL jobs, crawlers, workflows).
- Proficiency in PySpark, Python, and SQL.
- Hands-on with AWS services: S3, Lambda, Step Functions, CloudWatch, IAM, Athena, Redshift, Glue Data Catalog.
- Knowledge of data warehousing, data modeling, and data lakes.
- Strong in pipeline orchestration and performance tuning.
- Familiarity with DevOps tools (CI/CD, Git) and Agile methodology.

Preferred Qualifications
- AWS Certified Data Analytics Specialty or AWS Certified Solutions Architect.
- Experience with streaming data (Kinesis, Kafka).
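For context on the kind of Glue ETL work listed above, here is a minimal, illustrative Glue PySpark job skeleton; the catalog database, table, bucket path, and partition key are placeholder assumptions, not details from the posting.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import DropNullFields
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job boilerplate.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered by a crawler in the Glue Data Catalog
# ("raw_db" / "orders" are placeholder names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Light transform: drop null-only fields, then land curated Parquet on S3,
# partitioned by order date.
cleaned = DropNullFields.apply(frame=source)
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={
        "path": "s3://example-curated-bucket/orders/",
        "partitionKeys": ["order_date"],
    },
    format="parquet",
)

job.commit()
```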
Posted 1 week ago
8.0 - 12.0 years
15 - 20 Lacs
bengaluru
Work from Office
Qualifications
- 5+ years of hands-on DBT experience, including model design, testing, and optimization.
- 8+ years of strong SQL experience, with proven skills in query optimization and database performance tuning.
- 5+ years of programming experience, including custom DBT macros, scripting, APIs, and AWS integrations using boto3.
- 3+ years of experience with orchestration tools like Apache Airflow or Prefect.
- Proven experience with cloud data platforms (Snowflake, Redshift, Databricks, or BigQuery).
- Hands-on knowledge of AWS data services (S3, Lambda, Step Functions, RDS, SQS, CloudWatch).
- Familiarity with serverless architectures and infrastructure as code (CloudFormation, Terraform).
- Strong communication skills with the ability to deliver MVPs aligned with sprint timelines.
- Excellent analytical and problem-solving abilities, with a track record of cross-functional collaboration.
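As an example of the lightweight boto3/Step Functions integration work this list mentions, the sketch below kicks off a (hypothetical) ELT state machine; the ARN, execution name, and input payload are assumptions made for illustration.

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARN -- the state machine would wrap the dbt/ELT workflow.
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:nightly-elt"

response = sfn.start_execution(
    stateMachineArn=STATE_MACHINE_ARN,
    name="nightly-elt-2024-01-01",   # execution names must be unique per state machine
    input=json.dumps({"run_date": "2024-01-01", "models": ["staging", "marts"]}),
)
print("started:", response["executionArn"])
```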
Posted 1 week ago
1.0 - 3.0 years
25 - 40 Lacs
mumbai, delhi / ncr, bengaluru
Work from Office
Frontend Software Engineer - Remote (International)

The company is South America based and is looking for candidates with the below skills in India. Candidates can be anywhere in India. Candidates from IIT, NIT, BITS, or any reputed engineering firms are welcome. Candidates working in high-growth startups in India are most welcome.

Key Responsibilities
- Build and ship production-ready frontend features independently
- Lead projects from architecture to deployment
- Create UI components for salary visualization and comparison tools
- Develop interfaces for enterprise compensation benchmarking
- Enhance the visual offer letter product
- Integrate the frontend with backend APIs
- Work across multiple product verticals
- Debug and fix issues the same day
- Contribute to consumer and enterprise products

Tech Stack: React, Next.js, JavaScript, HTML, CSS, AWS S3, Git, Crisp, static site generators

Why Join?
- Join a profitable seed-stage startup with 1.5M+ monthly users
- Work on a mission-driven product democratizing career information
- Ship features the same day in a high-ownership environment
- Remote-first culture with flexible hours outside golden time
- Learn from founders with LinkedIn and AWS experience
- Competitive equity in a growing company
- Minimal bureaucracy in a 15-person team
- Extremely lean tech stack with minimal overhead

Interview Process
- 15-minute knockout screening
- Algorithmic assessment
- Project-based assessment (frontend modifications)
- 30-minute culture fit

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, anywhere in India, Remote
Posted 1 week ago
5.0 - 10.0 years
20 - 25 Lacs
pune
Hybrid
Required Skills and Experience
- 8 to 10 years of experience in data engineering or backend development in cloud-based environments.
- Strong Python development experience, with emphasis on PySpark for data transformation and parsing.
- Proficiency with PySpark for distributed data processing.
- Deep understanding of SQL, with hands-on experience in MySQL, PostgreSQL, or Snowflake.
- Strong experience with AWS services, especially:
  - AWS Lambda and API Gateway for API development
  - AWS Glue for orchestration and transformation
  - Amazon S3, RDS, and ECS for data storage and compute
- Experience developing and managing RESTful APIs.
- Experience implementing and managing CI/CD pipelines.
- Hands-on experience with GitHub and version control in collaborative environments.

This opportunity is to build and scale data infrastructure for a security-focused enterprise platform, with ownership of technical design and implementation in a modern, cloud-native environment and a collaborative, high-performance culture focused on engineering excellence. Based in Pune with flexible work arrangements and professional development opportunities.
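To illustrate the Lambda-plus-API-Gateway API development called out above, here is a minimal proxy-integration handler sketch; the route, field names, and static payload are assumptions (a real implementation would query RDS/PostgreSQL or Snowflake instead).

```python
import json


def handler(event, context):
    """Sketch of an API Gateway (Lambda proxy integration) handler for GET /records/{id}."""
    record_id = (event.get("pathParameters") or {}).get("id")
    if not record_id:
        return {
            "statusCode": 400,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"error": "missing record id"}),
        }

    # In the real service this lookup would hit the database;
    # a static payload keeps the sketch self-contained and runnable.
    record = {"id": record_id, "status": "active"}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(record),
    }
```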
Posted 1 week ago
4.0 - 8.0 years
0 - 0 Lacs
maharashtra
On-site
The candidate should have hands-on experience in developing and managing AWS infrastructure and DevOps setup, along with the ability to delegate and distribute tasks effectively within the team. Responsibilities include deploying, automating, maintaining, and managing an AWS production system, ensuring reliability, security, and scalability. The role involves resolving problems across multiple application domains and platforms using system troubleshooting and problem-solving techniques.

Additionally, the candidate will be responsible for automating different operational processes by designing, maintaining, and managing tools, providing primary operational support and engineering for cloud issues and application deployments, and leading the organization's platform security efforts in collaboration with the engineering team. They will also need to maintain and improve existing policies, standards, and guidelines for IaC and CI/CD that teams can follow, work closely with the Product Owner and development teams on continuous improvement, and analyze and troubleshoot infrastructure issues while developing tools and systems for task automation.

The required tech stack for handling the daily operational workload and managing the team includes Cloud: AWS and AWS services such as CloudFront, S3, EMR, VPC, VPN, EKS, EC2, CloudWatch, Kinesis, Redshift, Organizations, IAM, Lambda, CodeCommit, CodeBuild, ECR, CodePipeline, Secrets Manager, SNS, and Route 53, as well as DevOps tools like SonarQube, FluxCD, Terraform, Prisma Cloud, Kong, and Site24x7.

This position may require occasional travel to support cloud initiatives and attend conferences or training sessions. The role typically involves working various shifts to support customers in a 24/7 roster-based model within an office environment.
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
maharashtra
On-site
Founded in 2018, LiftLab is the leading provider of science-driven software to optimize marketing spend and predict revenue for optimal spend levels. Our platform combines economic modeling with specialized media experimentation so brands and agencies across the globe can clearly see the tradeoffs of growth and profitability of their ad platform partners, including FB/Instagram, Google, Display, Snapchat, TikTok, and Pinterest. They use these signals to make media spend decisions with confidence. LiftLab customers know, for each channel, tactic, and campaign, whether the revenue for the last $1 spent covered its cost or not. LiftLab is built on a foundation of advanced algorithms and sophisticated media experimentation. Some of the most marquee brands already run LiftLab, and we're growing at a rapid pace.

LiftLab is seeking a highly motivated and experienced Full Stack Developer to join our dynamic team. In this role, you will play a key part in shaping the future of data-driven marketing insights at LiftLab. You will work closely with our Development Manager and collaborate with cross-functional teams to build and maintain cutting-edge solutions in an AWS environment. If you are a talented developer with a strong background in both backend and frontend technologies and a passion for building innovative solutions, we invite you to be a part of our team.

Key Responsibilities:
- Full Stack Development: Design, develop, and maintain both backend and frontend software solutions to support LiftLab's marketing optimization platform.
- Collaboration: Work closely with data scientists, product managers, and other engineers to implement features and improvements.
- Backend Expertise: Write high-quality, scalable, and efficient code using Java 11+.
- Frontend Development: Develop user interfaces using modern frontend frameworks like Angular.
- Microservices Development: Develop modules in a polyglot environment using microservices architecture.
- CI/CD Processes: Work in an automated CI/CD environment to ensure the continuous delivery of software.
- Testing: Develop and execute unit test cases (e.g., JUnit) to ensure code reliability and quality.
- AWS Integration: Utilize AWS services such as EKS, Lambda, S3, and RDS to build and deploy applications.
- Troubleshooting: Troubleshoot and resolve technical issues in a timely manner.
- Technical Leadership: Mentor junior engineers and provide technical guidance.
- Continuous Learning: Stay up-to-date with industry trends and technologies to recommend enhancements to our software stack.

Qualifications:
- Education: Bachelor's degree in Computer Science, Software Engineering, or a related field. A Master's degree is a plus.
- Experience: 4-8 years of professional software development experience.
- Backend Skills: Strong proficiency in Java 9+.
- Frontend Skills: Experience with frontend technologies such as Angular, React, or Vue.js.
- Web Technologies: Solid understanding of HTML, CSS, and JavaScript.
- AWS Experience: Hands-on experience with AWS services like EKS, Lambda, S3, and RDS.
- CI/CD: Experience with automated CI/CD processes.
- Containerization: Knowledge of Docker and Kubernetes.
- Testing: Proficiency in writing unit tests and conducting code reviews.
- Microservices: Strong knowledge of microservices architecture.
- Additional Skills: Basic knowledge of Python is a plus.
- Soft Skills: Excellent problem-solving abilities, strong communication, and collaboration skills.
- Passion: A keen interest in staying current with emerging technologies and trends in software development.

LiftLab is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
The successful candidate will be instrumental in helping to build, test, and maintain the infrastructure and tools to allow for the speedy development and release of software and updates. You will also be responsible for pushing the limits of a wide breadth of the tools and approaches to AI/Machine Learning for advanced products and systems using video information and IoT systems. Your experience researching and developing new projects with a team will be crucial, as you apply your knowledge and communication abilities to discuss problem requirements and plan new ideas with the team and stakeholders. Your responsibilities will include building and setting up new development tools and infrastructure, designing, building, and maintaining efficient, reusable, and tested code in Python and other applicable languages and library tools. Understanding the needs of stakeholders and effectively conveying them to developers, working on ways to automate and improve development and release processes, and bringing your professional experience in working with video technologies will be part of your day-to-day tasks. You will deploy Machine Learning (ML) to large production environments, drive continuous learning in the AI and computer vision fields, test and examine code written by others, analyze results, ensure system safety and security against cybersecurity threats, and identify technical problems and develop software updates and fixes. Additionally, you will work closely with software developers and engineers to ensure that development follows established processes and works as intended, as well as plan out projects and be involved in project management decisions. To qualify for this role, you must have a minimum of 3 years of hands-on experience with AWS services and products (Batch, SageMaker, StepFunctions, CloudFormation/CDK), strong Python experience, and at least 3 years of experience with Machine Learning/AI or Computer Vision development/engineering. You should be able to provide technical leadership to developers for designing and securing solutions, have an understanding of Linux utilities and Bash, familiarity with containerization using Docker, and experience with data pipeline frameworks such as MetaFlow. Exposure to technologies/tools like Keras, Pandas, TensorFlow, PyTorch, Caffe, NumPy, and DVC/CML is preferred. Practical experience deploying Computer Vision/Machine Learning solutions at scale into production will be an advantage. At GlobalLogic, we prioritize a culture of caring, where we consistently put people first. You will experience an inclusive culture of acceptance and belonging, build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. We are committed to your continuous learning and development, offering numerous opportunities to try new things, sharpen your skills, and advance your career. With interesting and meaningful work on impactful projects, a high-trust organization, and a focus on balance and flexibility, GlobalLogic provides a supportive and enriching environment for your professional growth and personal well-being.,
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
noida, uttar pradesh
On-site
The position at Iris Software in Noida, UP, India, offers an exciting opportunity for professionals looking to advance their career in the IT industry. As part of one of India's Top 25 Best Workplaces, you will have the chance to contribute to the growth of one of the fastest-growing IT services companies. Thrive in an award-winning work culture that recognizes and supports your talent and career aspirations. Iris Software is committed to creating a work environment where employees feel valued, have room to explore their potential, and are provided with opportunities to grow both professionally and personally.

As an associate at Iris Software, you will be part of a team dedicated to becoming our clients' most trusted technology partner. With a presence in India, the U.S.A., and Canada, we assist enterprise clients in achieving technology-enabled transformation in sectors such as financial services, healthcare, transportation & logistics, and professional services. Our projects involve working on complex, mission-critical applications utilizing cutting-edge technologies such as Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation.

The ideal candidate for this position should possess expertise in AWS CDK, AWS services (Lambda, ECS, S3), and PostgreSQL database management. A strong understanding of serverless architecture and event-driven design (SNS, SQS) is required. Additionally, knowledge of multi-account AWS setups and security best practices (IAM, VPC, etc.), and experience with cost optimization strategies in AWS, would be advantageous.

Key Competencies:
- Cloud - AWS
- Cloud - AWS Lambda
- Database - PostgreSQL
- Cloud - ECS
- Data on Cloud - AWS S3
- DevOps - CI/CD
- Beh - Communication and collaboration

Join Iris Software and enjoy a range of perks and benefits designed to support your financial, health, and well-being needs. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we prioritize creating a supportive and rewarding work environment for all our employees. Experience the difference of working at a company that values your success and happiness.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
Flexing It is a freelance consulting marketplace that connects freelancers and independent consultants with organizations seeking independent talent. Our client, a global leader in energy management and automation, is currently seeking a Data Engineer to prepare data and make it available in an efficient and optimized format for various data consumers, including BI, analytics, and data science applications. As a Data Engineer, you will work with current technologies such as Apache Spark, Lambda & Step Functions, Glue Data Catalog, and RedShift on the AWS environment. Key Responsibilities: - Design and develop new data ingestion patterns into IntelDS Raw and/or Unified data layers based on the requirements and needs for connecting new data sources or building new data objects. Automate data pipelines to streamline the process. - Implement DevSecOps practices by automating the integration and delivery of data pipelines in a cloud environment. Design and implement end-to-end data integration tests and CICD pipelines. - Analyze existing data models, identify performance optimizations for data ingestion and consumption to accelerate data availability within the platform and for consumer applications. - Support client applications in connecting and consuming data from the platform, ensuring compliance with guidelines and best practices. - Monitor the platform, debug detected issues and bugs, and provide necessary support. Skills required: - Minimum of 3 years of prior experience as a Data Engineer with expertise in Big Data and Data Lakes in a cloud environment. - Bachelor's or Master's degree in computer science, applied mathematics, or equivalent. - Proficiency in data pipelines, ETL, and BI, regardless of the technology. - Hands-on experience with AWS services including at least 3 of: RedShift, S3, EMR, Cloud Formation, DynamoDB, RDS, Lambda. - Familiarity with Big Data technologies and distributed systems such as Spark, Presto, or Hive. - Proficiency in Python for scripting and object-oriented programming. - Fluency in SQL for data warehousing, with experience in RedShift being a plus. - Strong understanding of data warehousing and data modeling concepts. - Familiarity with GIT, Linux, CI/CD pipelines is advantageous. - Strong systems/process orientation with analytical thinking, organizational skills, and problem-solving abilities. - Ability to self-manage, prioritize tasks in a demanding environment. - Consultancy orientation and experience with the ability to form collaborative working relationships across diverse teams and cultures. - Willingness and ability to train and teach others. - Proficiency in facilitating meetings and following up with action items.,
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
pune, maharashtra
On-site
As a Software Architect at our esteemed organization, you will play a pivotal role in the design and implementation of scalable, reliable, and high-performance software features. Your expertise in software design principles, system integration, and emerging technologies will be instrumental in driving innovation and guiding the development team towards delivering robust and efficient software solutions. Your responsibilities will encompass defining the overall architecture, technology stack, and development standards to ensure the successful completion of software projects. Your primary responsibilities will include the following key areas: Software Architecture Design: You will be responsible for defining the overall software architecture and technical roadmap for the organization. Technology Stack Selection: Evaluating and selecting appropriate technologies, frameworks, and tools for software development while staying updated on emerging technologies and industry trends. Platform Design / Changes: Providing guidance for any platform changes or enhancements and ensuring that technical designs are created by team leaders. Technical Leadership: Offering technical leadership and guidance to the development team, collaborating with developers to resolve technical challenges, reviewing code, and overseeing the PR and Merging process. Mentoring and Training: Mentoring and training team leaders and members to ensure they are well-informed and capable of making informed decisions during the implementation phase. Software Development Standards: Establishing coding standards, development guidelines, and best practices to ensure high-quality software development. Technical Debts: Reviewing technical debt items, offering guidance, and approving proposed changes. Performance and Scalability: Guiding the team towards implementing scalable and performant solutions capable of handling high volumes of data and user traffic. Security and Compliance: Defining and enforcing software security practices and standards to ensure compliance with data protection regulations and industry best practices. Collaboration and Communication: Collaborating effectively with other architects and teams, aligning software architecture with business goals, and effectively communicating technical concepts to non-technical stakeholders. Release Management: Defining the release management process and overseeing the overall deployment process. System Documentation: Maintaining system technical documentation in a structured manner and ensuring thorough documentation of all system changes. Requirements: - Bachelor's or master's degree in computer science, Engineering, or a related field. - Proven experience in a senior engineering role. - Excellent communication skills in English, both written and verbal. - Willingness to occasionally travel to the US for team meetings and training. - Ability to work in different time zones and manage teams across multiple locations. - Strong technical expertise in SaaS product development practices. - Familiarity with agile methodologies, particularly Scrum. - Strong organizational and time management skills. - Experience in developing SaaS products in a non-consulting environment. - Knowledge of software development lifecycle and best practices in software engineering. Technical Requirements: - 10+ years of proven experience as a software architect or senior software engineer. - Proficiency in software design principles, system architecture, and development methodologies. 
- Expertise in multiple programming languages and frameworks. - Experience with cloud-based technologies and microservices architecture. - Knowledge of database design and optimization. - Familiarity with software security principles and practices. - Strong problem-solving and analytical skills. - Up-to-date knowledge of emerging technologies and industry trends. - Ability to handle multiple projects and prioritize effectively. - Proficiency in Java, Spring, AWS services, Docker/Kubernetes, CI-CD pipelines, GitHub, Kafka, NoSQL databases, etc. About Aumni Techworks: Aumni Techworks, established in 2016, is a Software Services Company that partners with Product companies to build and manage dedicated teams in India. At Aumni, we emphasize quality work, long-term client relationships, and continuous growth opportunities for our employees. Benefits of working at Aumni Techworks: - Award-winning culture with a focus on quality work. - Comprehensive medical and life insurance coverage. - Generous leave policies and additional leaves for various purposes. - On-site recreational facilities. - Hybrid work culture. - Fitness group and rewards programs. - Social events and annual parties for team building and relaxation.,
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
We are seeking an enthusiastic RPA & Intelligent Automation professional to join our newly founded CoE at Booking.com. Your role will involve supporting the rapidly growing portfolio of automation projects and making a significant impact across all business areas. As a valuable member of our team, you will operate with a high degree of autonomy and entrepreneurial spirit, acting as a service provider for the entire company. In this role, you will be responsible for driving efficiencies and seeking accountability both from yourself and others. Collaboration and a sense of comradery are essential qualities we are looking for in our ideal candidate. You should be willing to be cross-functional, continuously learn new skills, and strive for continuous improvement and high quality in your work. A strong work ethic, positive attitude, and a passion for solving real-world problems through technology are key attributes we value. To excel in this position, you should have 2-3 years of experience in developing with Blue Prism and hold a degree in CS, Engineering, or a related field. Blue Prism certification is a mandatory requirement for this role. Proficiency in core Python libraries such as pandas and NumPy, as well as familiarity with AI/ML frameworks like Tensorflow and PyTorch, are highly preferred. Knowledge of NLP techniques like text summarization, sentiment analysis, and Named Entity Recognition would be advantageous. Additionally, experience with AWS components such as RDS, EC2, S3, IAM, CloudWatch, Lambda, Sagemaker, and VPC is beneficial. Familiarity with tools like VAULT, PASSPORT, and Gitlab for UAM/Config Management, as well as exposure to Terraform for deploying AWS services, would be a plus. Professional experience with SQL, .NET, C#, HTTP APIs, and Web Services is required. Previous experience in designing, developing, deploying, and maintaining software, along with working in a scrum/agile environment, is desirable. Excellent communication skills in English are essential for this role. If you are a motivated and skilled professional looking to make a difference in the field of RPA and Intelligent Automation, we encourage you to apply for this exciting opportunity at Booking.com.,
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within the Commercial & Investment Bank, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives. You will execute software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems. Creating secure and high-quality production code and maintaining algorithms that run synchronously with appropriate systems will be part of your tasks. You will produce architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development. Additionally, you will gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems. Proactively identifying hidden problems and patterns in data and using these insights to drive improvements to coding hygiene and system architecture will be crucial. You will also contribute to software engineering communities of practice and events that explore new and emerging technologies, adding to the team culture of diversity, equity, inclusion, and respect. Required qualifications, capabilities, and skills include being strong in AWS Services like Redshift, Glue, S3, Terraform for infrastructure setup, and Python and ETL development. Formal training or certification on software engineering concepts is preferred, along with hands-on practical experience in system design, application development, testing, and operational stability. Proficiency in coding in one or more languages, 7+ years of experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages, and overall knowledge of the Software Development Life Cycle are essential. A solid understanding of agile methodologies such as CI/CD, Applicant Resiliency, and Security is required, along with demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.). Preferred qualifications, capabilities, and skills include familiarity with modern front-end technologies and exposure to cloud technologies.,
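As a hedged illustration of the Redshift-plus-Python ETL skills this posting asks for, the sketch below runs a simple row-count check through the Redshift Data API via boto3; the cluster, database, user, and table names are placeholders invented for the example.

```python
import time

import boto3

redshift_data = boto3.client("redshift-data")

# Placeholder cluster/database/user -- real values come from the environment.
query = redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql="SELECT COUNT(*) FROM staging.trades WHERE load_date = CURRENT_DATE;",
)

# Poll until the statement finishes, then read the single-cell result.
while True:
    desc = redshift_data.describe_statement(Id=query["Id"])
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if desc["Status"] != "FINISHED":
    raise RuntimeError(f"statement ended with status {desc['Status']}: {desc.get('Error')}")

result = redshift_data.get_statement_result(Id=query["Id"])
print("rows loaded today:", result["Records"][0][0]["longValue"])
```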
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
The ideal candidate for the position of Salesforce Heroku Developer should have deep experience with Heroku, Salesforce CRM, and cloud integration. You will be responsible for leading and supporting Salesforce Cloud to AWS migration initiatives. Your key responsibilities will include designing, building, and supporting Heroku applications integrated with Salesforce CRM, leading technical implementation of migration projects, re-architecting existing Heroku apps/services to align with AWS infrastructure, developing APIs and data pipelines for integrations, collaborating with DevOps and Cloud teams for building scalable CI/CD pipelines, troubleshooting and optimizing performance for Heroku and AWS-hosted services, and creating architecture documentation and technical delivery guides. You must have strong hands-on experience with the Salesforce Heroku platform, proficiency in Ruby on Rails, experience in migrating applications to AWS, solid knowledge of Apex, Heroku Connect, Postgres, and API integrations, working knowledge of AWS services like Lambda, API Gateway, S3, RDS, DynamoDB, familiarity with microservices architecture and containerization, strong programming skills in Node.js, Python, or Java, and experience with CI/CD tools like GitHub Actions, Jenkins, and Git. Good-to-have skills include Salesforce certifications, experience with Terraform or AWS CloudFormation, knowledge of data security and compliance in cloud environments, exposure to Agile/Scrum methodologies, and DevOps culture. The preferred experience range for this role is 6-10 years with at least 3+ years on Heroku and AWS integration projects.,
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
punjab
On-site
You are a growing logistics technology company that is currently developing a cutting-edge Trucking Management System (TMS) to enhance dispatching, load tracking, driver management, and automation in logistics operations. The TMS integrates real-time tracking, AI-driven analytics, and automation to optimize fleet operations. As a Full Stack Developer, your expertise in React, Node.js, PostgreSQL, and AWS will be crucial in enhancing the TMS platform. The ideal candidate should possess experience in logistics software, API integrations, and scalable architectures, with a preference for those with Team handling experience. Your responsibilities will include: - Front-End Development: Developing a modern user-friendly interface using React, implementing Redux and RTK for state management and HTTP requests, designing clean UI with Material-UI, and integrating Google Maps API and HERE Maps API. - Back-End Development: Developing and maintaining APIs using Node.js, implementing JWT-based authentication, building and maintaining a RESTful API, and optimizing performance for real-time operations. - Database Management: Using PostgreSQL for structured data storage, leveraging MongoDB where needed, and ensuring database performance, security, and scalability. - Cloud Infrastructure & Deployment: Deploying and managing services on AWS, optimizing server performance and cloud costs, and implementing scalable and secure cloud-based solutions. - Security & Compliance: Ensuring data security, role-based access control, session timeout mechanisms, and implementing logging and audit trails for user activities. Required Skills & Qualifications: - 5+ years of full-stack development experience, preferably in logistics or SaaS. - Expertise in React, Redux, Material-UI, RTK, Vite, Node.js, Express, PostgreSQL, MongoDB, Google Maps API, HERE Maps API, and AWS (EC2, S3, RDS). - Strong understanding of RESTful API design and authentication (JWT). Nice to Have: - Experience in AI/ML for logistics optimization, knowledge of IoT & telematics integrations, background in TMS or supply chain software development. Join us to work on an innovative logistics automation product, with growth opportunities in a fast-scaling startup and the freedom to innovate and implement new technologies. This is a full-time position with a day shift from Monday to Friday, based in Mohali, Punjab. Relocation before starting work is required for this role. You should have at least 5 years of experience as a Full Stack Developer.,
Posted 2 weeks ago
12.0 - 16.0 years
0 Lacs
kochi, kerala
On-site
As a Technical Lead at Cavli Wireless, you will be responsible for leading the design, development, and deployment of scalable cloud-based solutions. You will collaborate with cross-functional teams to ensure the seamless integration of cloud technologies in support of our IoT products and services. Your key responsibilities will include spearheading the design and implementation of cloud infrastructure and application architectures, ensuring they are scalable, secure, and highly available. In this role, you will provide technical leadership by offering guidance and mentorship to development teams, fostering a culture of continuous improvement and adherence to best practices. You will conduct thorough code reviews, debugging sessions, and knowledge-sharing initiatives to maintain high-quality code standards. Additionally, you will collaborate with stakeholders to gather and translate business requirements into technical specifications and actionable tasks. As a Technical Lead, you will define project scope, timelines, and deliverables in coordination with stakeholders to ensure alignment with business objectives. You will advocate for and implement industry best practices in coding, testing, and deployment processes while utilizing code versioning tools like GitHub to manage and track code changes effectively. The ideal candidate for this role should possess expertise in Angular framework for frontend technologies and proficiency in Node.js for backend technologies. Strong command over TypeScript and JavaScript, along with a working knowledge of Python, is required. Extensive experience with AWS services such as EC2, Lambda, S3, RDS, VPC, IoT Core, API Gateway, DynamoDB, and proficiency in using DevOps tools like CodePipeline, CodeDeploy, and CloudFormation are essential. A Bachelor's degree in Computer Science, Information Technology, or a related field (B.Tech/MCA) is required for this position. As a leader, you are expected to lead by example, demonstrating technical excellence and a proactive approach to problem-solving. You will mentor junior developers, provide guidance on technical challenges and career development, and foster a collaborative and inclusive team environment by encouraging open communication and knowledge sharing.,
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
As a Staff Analyst in Business Intelligence at Bloom Energy, you have the opportunity to be part of a revolutionary company that aims to transform how energy is generated and distributed globally. The company's mission is to provide clean, reliable, and affordable energy to customers around the world through its innovative Bloom Energy Server. Reporting to the Business Intelligence Senior Manager in Mumbai, India, you will play a crucial role in designing, implementing, and maintaining full-stack applications. Your responsibilities will include developing optimization algorithms, collaborating with stakeholders to integrate user feedback, and leveraging technological advancements to improve operational efficiency. Key Responsibilities: - Develop optimization algorithms and production-ready tools for Service Operations - Create software tools to enhance work efficiency and support informed decision-making in service operations - Assist in data analysis and scenario planning for the operations team - Build automated monitoring tools for critical customer performance - Manage databases and applications to ensure smooth functioning - Address bugs, troubleshoot problems, and continuously work towards enhancing products and technologies - Provide mentorship and training to junior team members Requirements: - Proficiency in object-oriented programming, data structures, algorithms, and web application development - Strong hands-on experience with back-end languages like Python, Ruby, and Java - Familiarity with databases such as PostgreSQL, Cassandra, AWS RDS, Redshift, and S3 - Knowledge of front-end languages like HTML, CSS, JavaScript, React, Redux, Vue, or Angular is a plus - Experience with version control software like Git - Understanding of distributed systems, test-driven development, SQL and NoSQL databases, performance optimization tools, and AWS services for app deployment - Excellent problem-solving skills Education: - Bachelor's degree in Computer Science, Computer Engineering, or related fields About Bloom Energy: Bloom Energy is committed to a 100% renewable future by offering resilient electricity solutions that can withstand power disruptions. The company's fuel-flexible technology has demonstrated exceptional reliability in extreme conditions like hurricanes, earthquakes, and utility failures. Bloom Energy's fuel cells produce no harmful local air pollutants, aligning with the shift towards renewable fuels like hydrogen and biogas. The company serves a wide range of industries including manufacturing, data centers, healthcare, retail, and more. For more information, please visit www.bloomenergy.com.,
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
pune, maharashtra
On-site
As a Software Engineer at micro1, you will play a crucial role in developing robust and scalable backend and data engineering solutions. Your expertise in data engineering, backend development, and cloud infrastructure will be utilized to design, build, and maintain cutting-edge systems. You will have the opportunity to work with industry-leading technologies and be at the forefront of AI innovation. Your responsibilities will include working with distributed processing systems such as Apache Spark to create data pipelines, deploying applications on AWS cloud platforms using services like EKS, S3, EC2, and Lambda, containerizing and orchestrating services with Docker and Kubernetes (EKS), and implementing storage solutions involving RDBMS and analytics databases. You will be expected to write clean, maintainable code in languages such as Python, Java, C#, or R, and apply strong software design principles to ensure performance, reliability, and scalability. Collaboration with product and engineering teams to deliver end-to-end features, staying current with industry trends, and embracing continuous learning are integral parts of this role. The ideal candidate for this position will have 6-8 years of hands-on software development experience in a product development environment, expertise in Big Data technologies like Apache Spark, strong familiarity with AWS cloud services and containerization, proficiency in languages like Python, Java, C#, or R, and a solid understanding of software performance, scalability, and reliability best practices. Strong analytical and logical thinking skills, excellent communication abilities, and the willingness to adapt to new technologies quickly are also essential qualifications for this role. If you are a self-driven technologist with a passion for building scalable and reliable systems, we encourage you to apply for this exciting opportunity at micro1. Join us in our mission to match talented individuals with their dream jobs and be part of a dynamic team that is shaping the future of AI innovation.,
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
As a Platform Support Manager at Guidewire Global Support (GGS) Platform team, you will have the exciting opportunity to lead a team of 8-12 Platform Support Engineers. Your responsibilities include motivating, hiring/mentoring/coaching, writing performance reviews, and preparing overall performance evaluations for your team. Daily coordination and guidance to ensure courteous, timely, high-quality, and effective responses to customer issues are essential. Building and managing relationships with other Platform Support Managers, participating in continuous improvement projects, and developing action plans based on customer satisfaction surveys are crucial aspects of this role. You will be expected to provide 24x7 support for customers, so being available for after-hours production emergencies is vital. Developing a broad knowledge of Guidewire Cloud Platform and software products is necessary. Your technical skills should include Object-Oriented Programming, relational databases, XML, and Cloud architecture. Experience with AWS services, Java web applications, software development lifecycle, CI/CD concepts, and customer incident tracking systems is highly desirable. Collaborating with Service Providers and Partner organizations, handling multiple tasks with changing priorities, and demonstrating critical attention to detail are important qualities for this position. To be successful in this role, you should have a Bachelor's Degree in Computer Science or a related field, with at least 8 years of technical experience and 3 years of supervisory or leadership experience of a customer-facing IT/Technical team. Dedication to customer service, strong interpersonal skills, and the ability to establish relationships with all levels of management are key. Fluency in English is required, and proficiency in another language such as French or German is a plus. Occasional travel to other Guidewire offices for training and meetings may be expected.,
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
Join us as an Apply Domain OneTeam Lead at Barclays, where you will be responsible for operationalizing cutting-edge machine learning and generative AI solutions, ensuring scalable, secure, and efficient deployment across infrastructure. As a part of team of developers, you will deliver technology stack, using strong analytical and problem solving skills to understand the business requirements and deliver quality solutions. To be successful as an Apply Domain OneTeam Lead, you should have experience with expertise knowledge and experience in AWS (especially services like Cloudwatch, DynamoDB, Lambda, ECS, CloudFormation, Kinesis, EC2, S3, Api Gateway and Load Balancers), good hands-on experience in Jenkins, sound knowledge on ITIL processes (Incident, Problem, and Change Management), as well as strong inter-personal skills and stakeholder management. Some other highly valued skills may include experience in using or implementing code deployment tools to create and control infrastructure as code, good to have knowledge in automation and scripting skills with Python/Java, and experience in automating maintenance/management for Docker-like environment. You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based out of Pune. Purpose of the Role: To build and maintain infrastructure platforms and products that support applications and data systems, using hardware, software, networks, and cloud computing platforms as required with the aim of ensuring that the infrastructure is reliable, scalable, and secure. Ensure the reliability, availability, and scalability of the systems, platforms, and technology through the application of software engineering techniques, automation, and best practices in incident response. Accountabilities: Build Engineering: Development, delivery, and maintenance of high-quality infrastructure solutions to fulfill business requirements ensuring measurable reliability, performance, availability, and ease of use. Including the identification of the appropriate technologies and solutions to meet business, optimization, and resourcing requirements. Incident Management: Monitoring of IT infrastructure and system performance to measure, identify, address, and resolve any potential issues, vulnerabilities, or outages. Use of data to drive down mean time to resolution. Automation: Development and implementation of automated tasks and processes to improve efficiency and reduce manual intervention, utilizing software scripting/coding disciplines. Security: Implementation of a secure configuration and measures to protect infrastructure against cyber-attacks, vulnerabilities, and other security threats, including protection of hardware, software, and data from unauthorized access. Teamwork: Cross-functional collaboration with product managers, architects, and other engineers to define IT Infrastructure requirements, devise solutions, and ensure seamless integration and alignment with business objectives via a data-driven approach. Learning: Stay informed of industry technology trends and innovations, and actively contribute to the organization's technology communities to foster a culture of technical excellence and growth. Assistant Vice President Expectations: To advise and influence decision making, contribute to policy development and take responsibility for operational effectiveness. 
Collaborate closely with other functions/business divisions. Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraise performance relative to objectives, and determine reward outcomes.

If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviors to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviors are: L - Listen and be authentic, E - Energize and inspire, A - Align across the enterprise, D - Develop others.

For an individual contributor, they will lead collaborative assignments and guide team members through structured assignments, identifying the need for the inclusion of other areas of specialization to complete assignments. They will identify new directions for assignments and/or projects, identifying a combination of cross-functional methodologies or practices to meet required outcomes. Consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues. Identify ways to mitigate risk and develop new policies/procedures in support of the control and governance agenda. Take ownership for managing risk and strengthening controls in relation to the work done. Perform work that is closely related to that of other areas, which requires an understanding of how areas coordinate and contribute to the achievement of the objectives of the organization sub-function. Collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and the business strategy. Engage in complex analysis of data from multiple sources of information, internal and external, such as procedures and practices (in other areas, teams, companies, etc.) to solve problems creatively and effectively. Communicate complex information; "complex" information could include sensitive information or information that is difficult to communicate because of its content or its audience. Influence or convince stakeholders to achieve outcomes.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship - our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset - to Empower, Challenge and Drive - the operating manual for how we behave.
Posted 2 weeks ago
5.0 - 10.0 years
0 Lacs
haryana
On-site
You will be responsible for designing, building, and maintaining scalable and efficient data pipelines to facilitate the movement of data between cloud-native databases (e.g., Snowflake) and SaaS providers using AWS Glue and Python. Your role will involve implementing and managing ETL/ELT processes to ensure seamless data integration and transformation while adhering to information security and compliance with data governance standards. Additionally, you will be tasked with maintaining and enhancing data environments, including data lakes, warehouses, and distributed processing systems. It is crucial to utilize version control systems (e.g., GitHub) effectively to manage code and collaborate with the team. In terms of primary skills, you should possess expertise in enhancements, new development, defect resolution, and production support of ETL development using AWS native services. Your responsibilities will also include integrating data sets using AWS services such as Glue and Lambda functions, utilizing AWS SNS for sending emails and alerts, authoring ETL processes using Python and PySpark, monitoring ETL processes using CloudWatch events, connecting with different data sources like S3, and validating data using Athena. Experience in CI/CD using GitHub Actions, proficiency in Agile methodology, and extensive working experience with Advanced SQL are essential for this role. Furthermore, familiarity with Snowflake and understanding its architecture, including concepts like internal and external tables, stages, and masking policies, is considered a secondary skill. Your competencies and experience should include deep technical skills in AWS Glue (Crawler, Data Catalog) for over 10 years, hands-on experience with Python and PySpark for over 5 years, PL/SQL experience for over 5 years, CloudFormation and Terraform for over 5 years, CI/CD GitHub actions for over 5 years, experience with BI systems (PowerBI, Tableau) for over 5 years, and a good understanding of AWS services like S3, SNS, Secret Manager, Athena, and Lambda for over 5 years. Additionally, familiarity with Jira and Git is highly desirable. This position requires a high level of technical expertise in AWS Glue, Python, PySpark, PL/SQL, CloudFormation, Terraform, GitHub actions, BI systems, and AWS services, along with a solid understanding of data integration, transformation, and data governance standards. Your ability to collaborate effectively with the team, manage data environments efficiently, and ensure the security and compliance of data will be critical for the success of this role.,
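To make the Athena-based validation mentioned above concrete, here is a minimal boto3 sketch that runs a row-count check and returns the result; the database, table, and S3 output location are placeholder assumptions, and a real pipeline would typically publish a failure alert via SNS as the posting describes.

```python
import time

import boto3

athena = boto3.client("athena")


def run_validation(query: str, database: str, output: str) -> int:
    """Run a row-count validation query in Athena and return the count."""
    qid = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output},
    )["QueryExecutionId"]

    # Poll until the query reaches a terminal state.
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)
    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena validation query ended in state {state}")

    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    return int(rows[1]["Data"][0]["VarCharValue"])  # row 0 is the header row


count = run_validation(
    "SELECT COUNT(*) FROM curated.orders WHERE order_date = current_date",
    database="curated",                              # placeholder database
    output="s3://example-bucket/athena-results/",    # placeholder results location
)
print(f"rows loaded today: {count}")
```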
Posted 2 weeks ago