Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
3.0 - 5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job description: Role Purpose
The purpose of the role is to resolve, maintain and manage the client's software/hardware/network based on service requests raised by end users, as per the defined SLAs, ensuring client satisfaction.

Responsibilities:
- Ensure timely response to all tickets raised by the client's end users.
- Resolve service requests while maintaining quality parameters.
- Act as custodian of the client's network/server/system/storage/platform/infrastructure and other equipment, tracking their proper functioning and upkeep.
- Keep a check on the number of tickets raised (dial home/email/chat/IMS), ensuring the right solution within the defined resolution timeframe.
- Perform root cause analysis of the tickets raised and create an action plan to resolve the problem and ensure client satisfaction.
- Provide acceptance and immediate resolution for high-priority tickets/service requests.
- Install and configure software/hardware based on service requests.
- Adhere 100% to timelines as per the priority of each issue, to manage client expectations and ensure zero escalations.
- Provide application/user access as per client requirements and requests to ensure timely resolution.
- Track all tickets from acceptance to resolution stage, within the resolution time defined by the customer.
- Maintain timely backups of important data/logs and management resources so that the solution is of acceptable quality and maintains client satisfaction.
- Coordinate with the on-site team for complex problem resolution and ensure timely client servicing.
- Review the logs gathered by chatbots and ensure all service requests/issues are resolved in a timely manner.

Deliver:
No. | Performance Parameter | Measure
1 | 100% adherence to SLA/timelines | Multiple cases of red time; zero customer escalations; client appreciation emails

Mandatory Skills: AWS Cloud Management. Experience: 3-5 Years.

Reinvent your world. We are building a modern Wipro.
We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 1 day ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What You’ll Be Doing... We are seeking a visionary and technically strong Senior AI Architect to join our Billing IT organization in driving innovation at the intersection of telecom billing, customer experience, and artificial intelligence. This leadership role will be pivotal in designing, developing, and scaling AI-led solutions that redefine how we bill our customers, improve their billing experience, and derive actionable insights from billing data. You will work closely with cross-functional teams to lead initiatives that transform customer-facing systems, backend data platforms, and software development practices through modern AI technologies. Key Responsibilities Customer Experience Innovation: Designing and implementing AI-driven enhancements to improve telecom customer experience, particularly in the billing domain. Leading end-to-end initiatives that personalize, simplify, and demystify billing interactions for customers. AI Tools and Platforms: Evaluating and implementing cutting-edge AI/ML models, LLMs, SLMs, and AI-powered solutions for use across the billing ecosystem. Developing prototypes and production-grade AI tools to solve real-world customer pain points. 
Prompt Engineering & Applied AI: Exhibiting deep expertise in prompt engineering and advanced LLM usage to build conversational tools, intelligent agents, and self-service experiences for customers and support teams. Partnering with design and development teams to build intuitive AI interfaces and utilities. AI Pair Programming Leadership: Demonstrating hands-on experience with AI-assisted development tools (e.g., GitHub Copilot, Codeium). Driving adoption of such tools across development teams, tracking measurable productivity improvements, and integrating them into SDLC pipelines. Data-Driven Insight Generation: Leading large-scale data analysis initiatives using AI/ML methods to generate meaningful business insights, predict customer behavior, and prevent billing-related issues. Establishing feedback loops between customer behavior and billing system design. Thought Leadership & Strategy: Acting as a thought leader in AI and customer experience within the organization. Staying abreast of trends in AI and telecom customer experience; regularly benchmarking internal initiatives against industry best practices. Architectural Excellence: Owning and evolving the technical architecture of AI-driven billing capabilities, ensuring scalability, performance, security, and maintainability. Collaborating with enterprise architects and domain leads to align with broader IT and digital transformation goals. Telecom Billing Domain Expertise: Bringing a deep understanding of telecom billing functions, processes, and IT architectures, including usage processing, rating, billing cycles, invoice generation, adjustments, and revenue assurance. Providing architectural guidance to ensure AI and analytics solutions are well integrated into core billing platforms with minimal operational risk. Where you'll be working... In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. What We’re Looking For...
You’re energized by the prospect of putting your advanced expertise to work as one of the most senior members of the team. You’re motivated by working on groundbreaking technologies to have an impact on people’s lives. You’ll Need To Have: Bachelor’s degree or four or more years of work experience. Six or more years of relevant experience required, demonstrated through one or a combination of work experience. Strong understanding of AI/ML concepts, including generative AI and LLMs (Large Language Models), with the ability to evaluate and apply them to solve real-world problems in telecom and billing. Familiarity with industry-leading AI models and platforms (e.g., OpenAI GPT, Google Gemini, Microsoft Phi, Meta LLaMA, AWS Bedrock), and understanding of their comparative strengths, pricing models, and applicability. Ability to scan and interpret AI industry trends, identify emerging tools, and match them to business use cases (e.g., bill explainability, predictive analytics, anomaly detection, agent assist). Skilled in adopting and integrating third-party AI tools (rather than building from scratch) into existing IT systems, ensuring fit-for-purpose usage with strong ROI. Experience working with AI product vendors, evaluating PoCs, and influencing make-buy decisions for AI capabilities. Comfortable guiding cross-functional teams (tech, product, operations) on where and how to apply AI tools, including identifying appropriate use cases and measuring impact. Deep expertise in writing effective and optimized prompts across various LLMs. Knowledge of prompt chaining, tool-use prompting, function calling, embedding techniques, and vector search optimization. Ability to mentor others on best practices for LLM prompt engineering and prompt tuning. In-depth understanding of telecom billing functions: mediation, rating, charging, invoicing, adjustments, discounts, taxes, collections, and dispute management.
Strong grasp of billing SLAs, accuracy metrics, and compliance requirements in a telecom environment. Proven ability to define and evolve cloud-native, microservices-based architectures with AI components. Deep understanding of software engineering practices including modular design, API-first development, testing automation, and observability. Experience in designing scalable, resilient systems for high-volume data pipelines and customer interactions. Demonstrated hands-on use of tools like GitHub Copilot, Codeium, AWS CodeWhisperer, etc. Strong track record in scaling adoption of AI pair programming tools across engineering teams. Ability to quantify productivity improvements and integrate tooling into CI/CD pipelines. Skilled in working with large-scale structured and unstructured billing and customer data. Proficiency in tools like SQL, Python (Pandas, NumPy), Spark, and data visualization platforms (e.g., Power BI, Tableau). Experience designing and operationalizing AI/ML models to derive billing insights, detect anomalies, or improve revenue assurance. Excellent ability to translate complex technical concepts to business stakeholders. Influential leadership with a track record of driving innovation, change management, and cross-functional collaboration. Ability to coach and mentor engineers, analysts, and product owners on AI technologies and best practices. Keen awareness of emerging AI trends, vendor platforms, open-source initiatives, and market best practices. Active engagement in AI communities, publications, or proof-of-concept experimentation. Even better if you have one or more of the following: A master’s degree. If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above.
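The prompt-chaining skill mentioned in the qualifications can be sketched roughly as follows. This is an illustrative toy, not Verizon's implementation: `call_llm` is a hypothetical stand-in for a real model API client, and the bill-explanation prompts are invented for the example.

```python
# Hypothetical sketch of prompt chaining: each step's model response
# feeds the next prompt template. call_llm is a placeholder; a real
# system would call an actual model API client here.
def call_llm(prompt: str) -> str:
    """Placeholder LLM: returns a canned string for illustration."""
    return f"[model response to: {prompt[:40]}...]"

def chain(steps, initial_input: str) -> str:
    """Run prompt templates in sequence, feeding each response into
    the next template's {previous} slot."""
    previous = initial_input
    for template in steps:
        previous = call_llm(template.format(previous=previous))
    return previous

# Invented two-step chain: extract line items, then explain them.
steps = [
    "Extract the line items from this bill: {previous}",
    "Explain these charges in plain language: {previous}",
]
result = chain(steps, "Monthly invoice: plan $40, device $20, taxes $5")
```

In a production chain the intermediate outputs would typically be validated (or parsed as structured JSON) before being passed on, rather than spliced into the next prompt verbatim.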
Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
Posted 1 day ago
3.0 - 6.0 years
9 - 13 Lacs
Pune
Work from Office
Job Summary: Bluphlux is seeking a talented Data Scientist specializing in Natural Language Processing (NLP) to join our innovative team. As a Data Scientist at Bluphlux, you will play a crucial role in developing and enhancing our AI-driven recruitment solutions. You will work with cutting-edge technologies to improve the accuracy and efficiency of our language models, ensuring that our clients receive the best possible candidates for their needs. Key Responsibilities: Develop and implement NLP models to enhance our recruitment platform. Collaborate with cross-functional teams to integrate NLP solutions into existing systems. Analyze and process large datasets to extract meaningful insights and improve model performance. Stay up-to-date with the latest advancements in NLP and AI to ensure our solutions remain at the forefront of technology. Conduct experiments and evaluate the effectiveness of different NLP techniques and algorithms. Required Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Proven experience in developing and deploying NLP models. Strong programming skills in Python and familiarity with NLP libraries such as NLTK, spaCy, or similar. Experience with machine learning frameworks such as TensorFlow or PyTorch. Excellent problem-solving skills and attention to detail. Preferred Skills: PhD in a relevant field is a plus. Experience with cloud platforms such as AWS or Google Cloud. Familiarity with recruitment processes and HR technologies. Strong communication skills and ability to work in a team environment.
Posted 1 day ago
2.0 - 5.0 years
6 - 9 Lacs
Pune
Work from Office
Job Summary: Bluphlux is seeking a talented Machine Learning Engineer specializing in Natural Language Processing (NLP) to join our innovative team. As a Machine Learning Engineer at Bluphlux, you will play a crucial role in developing and enhancing our AI algorithms that revolutionize the recruitment process. You will work on cutting-edge projects that leverage Large Language Models (LLMs) to improve the efficiency and accuracy of resume ranking against job descriptions. Key Responsibilities: Develop and implement NLP models to enhance our recruitment platform. Collaborate with cross-functional teams to integrate AI solutions into existing systems. Optimize and fine-tune language models for improved performance and accuracy. Conduct research to stay updated with the latest advancements in NLP and machine learning. Analyze and interpret complex data to provide actionable insights for product improvement. Required Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Proven experience in machine learning and NLP projects. Strong programming skills in Python and familiarity with libraries such as TensorFlow or PyTorch. Experience with language models and natural language processing techniques. Excellent problem-solving skills and attention to detail. Preferred Skills: PhD in a relevant field is a plus. Experience with cloud platforms such as AWS or Google Cloud. Familiarity with recruitment processes and HR technologies. Strong communication skills and ability to work in a team environment.
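The resume-ranking task described above can be illustrated with a classical baseline (not Bluphlux's actual pipeline, which uses LLMs): TF-IDF weighting plus cosine similarity between a job description and each resume, here in pure Python on pre-tokenized text.

```python
import math
from collections import Counter

# Illustrative sketch: rank tokenized resumes against a job description
# using TF-IDF vectors and cosine similarity. All data below is invented.
def tfidf_vectors(docs):
    """Build a sparse TF-IDF vector (term -> weight) for each document."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}         # smoothed IDF
    return [{t: tf * idf[t] for t, tf in Counter(doc).items()} for doc in docs]

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

job = "python nlp machine learning engineer".split()
resumes = ["python nlp developer".split(), "sales manager retail".split()]
vecs = tfidf_vectors([job] + resumes)
scores = [cosine(vecs[0], v) for v in vecs[1:]]  # one score per resume
```

An LLM-based ranker would replace the TF-IDF vectors with learned embeddings (or direct relevance scoring), but the rank-by-similarity shape stays the same.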
Posted 1 day ago
6.0 - 9.0 years
5 - 15 Lacs
Bengaluru, Mumbai (All Areas)
Hybrid
Dear Candidate, Please find the job description below. Azure SRE Lead: Lead and mentor the team, and foster an SRE mindset & culture. Define SRE SLOs, SLIs & runbooks. Recommend corrective actions and solutions for auto-healing. Regards, Divya Grover +91 8448403677
Posted 1 day ago
2.0 - 6.0 years
5 - 8 Lacs
Kochi
Work from Office
Job Summary: We are seeking a highly skilled and motivated Machine Learning Engineer with a strong foundation in programming and machine learning, hands-on experience with AWS Machine Learning services (especially SageMaker), and a solid understanding of Data Engineering and MLOps practices. You will be responsible for designing, developing, deploying, and maintaining scalable ML solutions in a cloud-native environment. Key Responsibilities: Design and implement machine learning models and pipelines using AWS SageMaker and related services. Develop and maintain robust data pipelines for training and inference workflows. Collaborate with data scientists, engineers, and product teams to translate business requirements into ML solutions. Implement MLOps best practices including CI/CD for ML, model versioning, monitoring, and retraining strategies. Optimize model performance and ensure scalability and reliability in production environments. Monitor deployed models for drift, performance degradation, and anomalies. Document processes, architectures, and workflows for reproducibility and compliance. Required Skills & Qualifications: Strong programming skills in Python and familiarity with ML libraries (e.g., scikit-learn, TensorFlow, PyTorch). Solid understanding of machine learning algorithms, model evaluation, and tuning. Hands-on experience with AWS ML services, especially SageMaker, S3, Lambda, Step Functions, and CloudWatch. Experience with data engineering tools (e.g., Apache Airflow, Spark, Glue) and workflow orchestration. Proficiency in MLOps tools and practices (e.g., MLflow, Kubeflow, CI/CD pipelines, Docker, Kubernetes). Familiarity with monitoring tools and logging frameworks for ML systems. Excellent problem-solving and communication skills. Preferred Qualifications: AWS Certification (e.g., AWS Certified Machine Learning - Specialty). Experience with real-time inference and streaming data.
Knowledge of data governance, security, and compliance in ML systems.
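The "monitor deployed models for drift" responsibility above can be sketched with one common technique, the Population Stability Index (PSI), which compares the distribution of model scores at training time against live traffic. This is an illustrative stand-alone sketch with invented data, not a prescribed monitoring stack; in practice a tool like SageMaker Model Monitor or custom CloudWatch metrics would wrap a check like this.

```python
import math

# Population Stability Index between a baseline and a live score
# distribution, using fixed bins. A PSI above ~0.2 is a conventional
# rule of thumb for significant drift.
def histogram(values, edges):
    """Fraction of values in each [edges[i], edges[i+1]) bin;
    the final bin also includes its right edge."""
    counts = [0] * (len(edges) - 1)
    for v in values:
        for i in range(len(counts)):
            if edges[i] <= v < edges[i + 1] or (i == len(counts) - 1 and v == edges[-1]):
                counts[i] += 1
                break
    return [c / len(values) for c in counts]

def psi(baseline, live, edges, eps=1e-4):
    """Sum over bins of (q - p) * ln(q / p), with eps to avoid log(0)."""
    p = histogram(baseline, edges)
    q = histogram(live, edges)
    return sum((qi - pi) * math.log((qi + eps) / (pi + eps))
               for pi, qi in zip(p, q))

# Invented score samples for illustration.
edges = [0.0, 0.25, 0.5, 0.75, 1.0]
baseline = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9]
stable = [0.15, 0.22, 0.35, 0.55, 0.72, 0.88]   # same shape as baseline
shifted = [0.8, 0.85, 0.9, 0.92, 0.95, 0.99]    # scores piled into one bin
```

Running the check on `stable` yields a PSI near zero, while `shifted` produces a large value that would trigger a retraining or investigation alert.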
Posted 1 day ago
2.0 - 5.0 years
5 - 6 Lacs
Kochi
Work from Office
Key Responsibilities: Lead the design, development, and implementation of full-stack solutions using React JS, Node.js, and TypeScript. Provide technical guidance and mentorship to the development team. Collaborate with cross-functional teams to gather requirements and translate them into technical solutions. Architect scalable and maintainable full-stack applications on cloud platforms such as Azure or AWS. Conduct code reviews and ensure adherence to coding standards and best practices. Drive innovation and continuous improvement in development processes and technologies. Develop and maintain documentation for architecture, design, and code. Stay updated with emerging technologies and trends in full-stack development and cloud computing. Primary Skills: Proficiency in React JS and Node.js. Strong experience in developing full-stack solutions. Expertise in cloud platforms such as Azure or AWS. Ability to lead and guide a development team. Expertise in event-driven microservices. Hands-on coding experience and ability to provide technical solutions. Secondary Skills: Experience with Kubernetes and Helm charts. Understanding of any messaging system (Kafka, Service Bus, etc.). Modelling/designing database schemas and optimizing DB interactions (SQL). Provisioning or defining a CI/CD pipeline for a web application/microservice. Cloud architecture concepts.
Posted 1 day ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Java Full Stack Developer Lead Experience Level: 5–8 Years Location: [Bangalore, Chennai, Hyderabad] / Hybrid Job Type: Full-Time Budget: 15-18 LPA (Based on experience) Job Summary: We are looking for an experienced and highly skilled Java Full Stack Developer with 5–8 years of hands-on experience in designing and developing robust, scalable web applications. The ideal candidate should be proficient in Java, Spring Framework, Angular, and Microservices architecture, with a strong grasp of Data Structures and Algorithms. You will work on high-impact projects, collaborating with cross-functional teams to design, build, and deploy innovative software solutions. Key Responsibilities: Design and develop scalable and high-performance web applications using Java (Spring Boot) and Angular. Architect and implement microservices using best practices for distributed systems. Write clean, maintainable, and efficient code while following industry best practices. Collaborate with front-end and back-end developers, QA engineers, and product managers. Optimize application performance, scalability, and security. Participate in code reviews, mentor junior developers, and lead technical discussions. Solve complex problems with innovative and efficient algorithms. Ensure high-quality deliverables through unit testing and continuous integration. Stay updated on emerging technologies and development practices. Required Skills and Qualifications: 5–8 years of experience in software development, with a strong focus on full stack Java development. Expert-level proficiency in Java, Spring Boot, and Spring Framework (Core, Security, JPA, MVC). Strong front-end experience using Angular, HTML, CSS, TypeScript, and responsive design. Hands-on experience in developing and maintaining RESTful APIs and Microservices architecture. Solid understanding and practical application of Data Structures and Algorithms. Experience with databases like MySQL, PostgreSQL, or MongoDB. 
Working knowledge of CI/CD tools (e.g., Jenkins, GitLab CI) and containerization technologies like Docker and Kubernetes. Proficient in using version control tools like Git. Experience with cloud platforms (AWS, Azure, or GCP) is a plus. Strong problem-solving and analytical thinking skills. Excellent communication and collaboration abilities. Preferred Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. Experience with message brokers like Kafka or RabbitMQ. Exposure to Agile/Scrum development methodologies. Familiarity with automated testing frameworks.
Posted 1 day ago
2.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Summary: We are seeking a highly skilled and experienced DBA to join our expanding Information Technology team. In this role, you will help develop and design technology solutions that are scalable, relevant, and critical to our company’s success. You will join the team working on our new platform, built primarily on MS SQL Server and MySQL Server. You will participate in all phases of the development lifecycle, implementation, maintenance and support, and must have a solid skill set, a desire to continue to grow as a Database Administrator, and a team-player mentality. Key Responsibilities: 1. Manage production database servers, including security, deployment, maintenance and performance monitoring (primary responsibility). 2. Set up SQL Server replication, mirroring and high availability as required across hybrid environments. 3. Design and implement new installations on Azure, AWS and cloud hosting without managed DB services. 4. Deploy and maintain on-premise installations of SQL Server on Linux / MySQL installations. 5. Secure databases and protect against SQL injection, exploitation of intellectual property, etc. 6. Work with development teams, assisting with data storage and query design/optimization where required. 7. Participate in the design and implementation of essential applications. 8. Demonstrate expertise and add valuable input throughout the development lifecycle. 9. Help design and implement scalable, lasting technology solutions. 10. Review current systems, suggesting updates as required. 11. Gather requirements from internal and external stakeholders. 12. Document procedures to set up and maintain a highly available SQL Server database in Azure cloud, on-premise and hybrid environments. 13. Test and debug new applications and updates. 14. Resolve reported issues and reply to queries in a timely manner. 15. Remain up to date on all current best practices, trends, and industry developments.
16. Identify potential challenges and bottlenecks in order to address them proactively. Key Competencies/Skillsets: SQL Server management in hybrid environments (on-premise and cloud, preferably Azure, AWS). MySQL backup, SQL Server backup, replication, clustering, and log shipping experience on Linux/Windows. Setting up, managing and maintaining SQL Server/MySQL on Linux. Experience with database usage and management. Experience implementing Azure Hyperscale databases. Experience in the Financial Services / E-Commerce / Payments industry preferred. Familiarity with multi-tier, object-oriented, secure application design architecture. Experience in cloud environments, preferably Microsoft Azure database service tiers. Experience with PCI DSS a plus. SQL development experience is a plus. Linux experience is a plus. Proficient in using issue tracking tools like Jira. Proficient in using version control systems like Git, SVN, etc. Strong understanding of web-based applications and technologies. Sense of ownership and pride in your performance and its impact on the company’s success. Critical thinking and problem-solving skills. Excellent communication skills and the ability to communicate with clients via different modes of communication: email, phone, direct messaging, etc. Preferred Education and Experience: 1. Bachelor’s degree in computer science or a related field. 2. Minimum 2 to 4 years’ experience as a SQL Server DBA and in MSSQL, including replication, InnoDB Cluster, upgrading and patching. 3. Ubuntu Linux knowledge is preferred. 4. MCTS, MCITP, and/or MVP / Azure DBA / MySQL certifications a plus.
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
Bengaluru, Karnataka
On-site
While technology is the heart of our business, a global and diverse culture is the heart of our success. We love our people and take pride in fostering a culture built on transparency, diversity, integrity, learning and growth. If working in an environment that encourages you to innovate and excel, not just in your professional but personal life, interests you, you would enjoy your career with Quantiphi! Role: Associate Architect - Machine Learning. Experience Level: 5 to 9 years. Location: Bangalore / Mumbai (Hybrid). Roles and Responsibilities: As a core member of the Churn Modelling team, you will contribute to the technical design and implementation of churn prediction solutions, ensuring they align with project goals and architectural standards. Staying up to date with the latest advancements in churn prediction techniques, statistical modelling, and machine learning, and applying these to improve existing solutions. Communicating complex technical concepts in a clear and concise way to technical and non-technical stakeholders, including business analysts and customers. Designing and implementing data pipelines for churn modelling, including data pre-processing, feature engineering, and model deployment. Collaborating with cross-functional teams to identify business needs and provide technical guidance on churn prediction initiatives. Contributing to the development of best practices and standards for churn modelling within the organization. Participating in direct customer interactions to understand business requirements and present technical solutions. Skill Set Needed: 5 to 9 years of experience building end-to-end Machine Learning pipelines, with a focus on churn prediction. Strong proficiency in Python, including libraries such as NumPy and Pandas for data manipulation and analysis. Experience with data visualization tools and techniques to effectively communicate insights from churn data.
Proven ability to understand business requirements and translate them into technical solutions for churn prediction. Hands-on experience with GCP Services relevant to ML workflows, such as BigQuery for data warehousing, GCS Bucket for data storage, Vertex AI for model training and deployment, and Cloud Run for serving models. Experience with model interpretability and Explainable AI (XAI) techniques to understand the drivers of churn. Experience working with Big Data technologies and performing ETL processes. Strong SQL skills for data extraction and manipulation. Expertise in applying statistical and machine learning models for churn prediction, including Logistic Regression, Random Forest, XGBoost, etc. Strong communication skills with the ability to interact directly with customers to gather requirements and present solutions. Experience with Time Series Analysis and Survival Analysis techniques for churn prediction. Knowledge of Graph-based techniques for clustering and analyzing customer dynamics. Familiarity with Recommender Systems and their potential application in churn prevention. Understanding of Control Theory concepts, particularly as applied in Reinforcement Learning for optimizing customer retention strategies (Good to Have). Programming Language: Python Cloud Platform: GCP Good to Have: Experience with MLOps practices for deploying and managing churn models in production. Familiarity with other cloud platforms (e.g., AWS, Azure). Experience with A/B testing and experimentation for evaluating churn prevention strategies. Contributions to open-source projects related to machine learning or churn prediction. If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us !
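The logistic-regression churn modelling mentioned above can be sketched at its simplest as follows. This is a toy with invented features (not Quantiphi's pipeline); real work would use scikit-learn or XGBoost on engineered features rather than hand-rolled gradient descent.

```python
import math

# Minimal logistic regression for churn probability, fitted with
# plain stochastic gradient descent on the log-loss.
def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Per-sample gradient descent; returns (weights, bias)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict_proba(w, b, x) -> float:
    """Predicted churn probability for one customer."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Invented toy features: (support_tickets_scaled, tenure_scaled); 1 = churned.
X = [(0.1, 0.9), (0.2, 0.8), (0.9, 0.2), (0.8, 0.1)]
y = [0, 0, 1, 1]
w, b = fit_logistic(X, y)
```

After fitting, a high-ticket, low-tenure profile such as `(0.9, 0.1)` scores above 0.5 while a low-ticket, long-tenure profile scores below it; XAI techniques like SHAP would then attribute each prediction back to the input features.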
Posted 1 day ago
5.0 - 10.0 years
15 - 27 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
About Gracenote : Gracenote is the top provider of entertainment information, creating industry-leading databases of TV, movie, and music metadata for entertainment guides, applications and in-car entertainment. Our technology serves billions of requests daily to hundreds of millions of devices around the world. Our customers include innovators like Apple, Twitter, Google, Spotify, M-GO and Hulu, top consumer electronics and cable companies, and leading automotive manufacturers such as Ford and Toyota, throughout the US and the world. Simply put, data provides you with an opportunity to impact the evolution of the entire entertainment industry. Purpose: As a junior devops engineer, your role is to support the development teams by assisting in automation of deployment processes, monitoring system performance and troubleshooting issues. You will gain hands-on experience in implementing devops tools and practices, contributing to efficiency and reliability of our software delivery. Overall Responsibilities: Deploy, automate, maintain, and troubleshoot DevOps managed shared services Assist in Design, Implement and maintain scalable, reliable infrastructure to support SDLC Develop and manage robust CI/CD pipelines to streamline software releases and updates Ensure the availability, performance, scalability and security of production systems Contribute to automating system configurations, deployments and monitoring to enhance efficiency and reduce manual intervention. Research and implement new DevOps technologies Work closely with the Global DevOps and dedicated DevOps teams Suggest and implement best practices and architecture improvements Must haves: Proven ability to troubleshoot complex systems issues and implement effective solutions. 
Demonstrable DevOps experience. Experience with a programming or scripting language such as Python, Bash, or Java. Experience working in Linux and in a cloud environment. Minimum 2+ years with Apache Cassandra 4.1 administration/architecture. Proficient in implementing Infrastructure as Code using tools like Terraform and CloudFormation. Experience with Infrastructure as Code, containerization such as K8s/Docker, logging, monitoring and alerting, and CI/CD tools. Willingness to learn the above-mentioned qualifications if some are missing. Minimum 3+ years provisioning, operating, and managing AWS environments. Proficient with AWS services. Familiarity interacting with AWS APIs. AWS Disaster Recovery design and deployment across regions a plus. Desired Skills / Good to have: Experience with multi-tier architectures: load balancers, caching, web servers, application servers, databases, and networking. Experience in automation and testing via scripting & programming (PowerShell, Jenkins, Python, Ruby, Java). Experience working with geographically dispersed teams. Experience with secrets management tools like HashiCorp Vault. Professional certifications such as AWS Certified DevOps Engineer, Certified Kubernetes Administrator, etc. Demonstrated experience leading DevOps teams or projects, fostering collaboration. A personal technical blog. A personal (Git) repository of side projects. Participation in an open-source community. Qualifications: B.E / B.Tech / BCA / MCA in Computer Science, Engineering or a related subject. Strong Computer Science fundamentals. Comfortable with version control systems such as Git. A thirst for learning new tech and keeping up with industry advances. Excellent communication and knowledge-sharing skills. Comfortable working with technical and non-technical teams. Strong debugging skills. Comfortable providing and receiving code review feedback. Exceptional written and verbal communication skills. A positive attitude, adaptability, enthusiasm, and a growth mindset.
Posted 1 day ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Company Description: Opalina Technologies is a hub for technology enthusiasts where innovation and collaboration meet. We are dedicated to delivering class and uniqueness in every project. At Opalina, we believe in pushing the boundaries of technology with a focus on both present and future advancements. Role Description: This is a full-time remote role for an AWS Architect with over 10 years of experience. The AWS Architect will be responsible for designing and implementing scalable, reliable, and secure solutions using AWS services. You will work on media processing, audio/video streaming, and transcoding projects, as well as storage management. Tasks include leveraging tools such as Python, FastAPI, Golang, DynamoDB, Lambda, ECS, and EKS to develop and maintain robust architectures. Immediate joiners are encouraged to apply. Required Skills: Proven experience as an AWS Architect in large-scale systems. Proficient in Python (FastAPI) and/or Golang. Hands-on with AWS services: Lambda, DynamoDB, ECS, EKS, S3, CloudFront. Experience with media processing, audio/video streaming protocols, and transcoding tools (e.g., FFmpeg, AWS MediaConvert). Strong understanding of cloud security, scalability, and performance optimization. Ability to join immediately or within a short notice period. Qualifications: Experience in Solution Architecture and Integration. Proficiency in infrastructure design and management. Strong background in software development using Python, FastAPI, and Golang. Expertise in architecture implementation and AWS services such as DynamoDB, Lambda, ECS, and EKS. Experience with media processing, audio/video streaming, and transcoding. Excellent problem-solving and analytical skills. Strong communication and teamwork abilities. Bachelor's degree in Computer Science, Engineering, or a related field.
Posted 1 day ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

Senior Analyst – Data Engineering
The Data and Analytics team is a multi-disciplinary technology team delivering client projects and solutions across Data Management, Visualization, Business Analytics, and Automation. The assignments cover a wide range of countries and industry sectors.

The opportunity
We are looking for a Senior Analyst - Data Engineering. The main purpose of the role is to support cloud and on-prem platform analytics and data engineering projects initiated across engagement teams. The role will primarily involve conceptualizing, designing, developing, deploying, and maintaining complex technology solutions which help EY solve business problems for its clients. This role will work closely with technical architects, product and business subject matter experts (SMEs), back-end developers, and other solution architects, and is also onshore-facing. This role will be instrumental in designing, developing, and evolving modern data warehousing solutions and data integration build-outs using cutting-edge tools and platforms for both on-prem and cloud architectures. In this role you will produce design specifications and documentation, develop data migration mappings and transformations for a modern data warehouse setup/data mart creation, and define robust ETL processing to collect and scrub both structured and unstructured data, providing self-serve capabilities (OLAP) in order to create impactful decision analytics reporting.
Discipline: Information Management & Analysis
Role Type: Data Architecture & Engineering

A Data Architect & Engineer at EY uses agreed-upon methods, processes, and technologies to design, build, and operate scalable on-premises or cloud data architecture and modelling solutions that facilitate data storage, integration, management, validation, and security, supporting the entire data asset lifecycle. They design, build, and operate data integration solutions that optimize data flows by consolidating disparate data from multiple sources into a single solution, and work with other Information Management & Analysis professionals, the program team, management, and stakeholders to design and build analytics solutions in a way that will deliver business value.

Skills
Cloud Computing, Business Requirements Definition, Analysis and Mapping, Data Modelling, Data Fabric, Data Integration, Data Quality, Database Management, Semantic Layer, Effective Client Communication, Problem Solving / Critical Thinking, Interest and Passion for Technology, Analytical Thinking, Collaboration

Your Key Responsibilities
- Evaluating and selecting data warehousing tools for business intelligence, data population, data management, metadata management, and warehouse administration for both on-prem and cloud-based engagements
- Strong working knowledge across the technology stack, including ETL, ELT, data analysis, metadata, data quality, audit, and design
- Design, develop, and test in an ETL tool environment (GUI/canvas-driven tools used to create workflows)
- Experience in design documentation (data mapping, technical specifications, production support, data dictionaries, test cases, etc.)
- Provide technical guidance to a team of data warehouse and business intelligence developers
- Coordinate with other technology users to design and implement data governance, data harvesting, cloud implementation strategy, privacy, and security
- Adhere to ETL/data warehouse development best practices
- Own data orchestration, ingestion, ETL, and reporting architecture for both on-prem and cloud (MS Azure/AWS/GCP)
- Assist the team with performance tuning for ETL and database processes

Skills And Attributes For Success
- Minimum of 4 years of total experience in the data warehousing/business intelligence field
- Solid hands-on 3+ years of professional experience with the creation and implementation of data warehouses on client engagements, and helping create enhancements to a data warehouse
- Strong knowledge of data architecture for staging and reporting schemas, data models, and cutover strategies using industry-standard tools and technologies
- Architecture design and implementation experience with medium to complex on-prem to cloud migrations on any of the major cloud platforms (preferably AWS/Azure/GCP)
- Minimum 3+ years of experience in Azure database offerings (relational, NoSQL, data warehouse)
- 2+ years of hands-on experience in various Azure services preferred: Azure Data Factory, Kafka, Azure Data Explorer, Storage, Azure Data Lake, Azure Synapse Analytics, Azure Analysis Services, and Databricks
- Minimum of 3 years of hands-on database design, modelling, and integration experience with relational data sources, such as SQL Server, Oracle/MySQL, Azure SQL, and Azure Synapse
- Knowledge and direct experience using business intelligence reporting tools (Power BI, Alteryx, OBIEE, Business Objects, Cognos, Tableau, MicroStrategy, SSAS cubes, etc.)
- Strong creative instincts related to data analysis and visualization; curiosity to learn the business methodology, data model, and user personas
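The responsibilities above include defining ETL processing that scrubs structured and unstructured data. As a minimal, hedged sketch (the field names and rules are hypothetical, not from the posting), a data-quality scrub step typically splits incoming rows into clean records and quarantined rejects:

```python
def scrub_records(rows, required=("id", "amount")):
    """Split raw rows into normalised clean rows and rejects for quality review."""
    clean, rejects = [], []
    for row in rows:
        # Reject rows missing any required field (a basic completeness check).
        if any(row.get(f) in (None, "") for f in required):
            rejects.append(row)
            continue
        clean.append({
            "id": str(row["id"]).strip(),            # normalise identifier
            "amount": round(float(row["amount"]), 2) # coerce and round currency
        })
    return clean, rejects

clean, rejects = scrub_records([
    {"id": " 1 ", "amount": "10.456"},
    {"id": None, "amount": "3.2"},   # missing id -> quarantined
])
```

Real pipelines would push the rejects to a quarantine table and alert on the reject rate rather than silently dropping rows.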
- Strong understanding of BI and DWH best practices, analysis, visualization, and the latest trends
- Experience with the software development life cycle (SDLC) and product development practices such as installation, upgrade, and namespace management
- Solid technical and problem-solving skills
- Excellent written and verbal communication skills

To qualify for the role, you must have
- Bachelor's or equivalent degree in computer science or a related field, required; advanced degree or equivalent business experience preferred
- A fact-driven, thoughtful approach with excellent attention to detail
- Hands-on experience with data engineering tasks such as building analytical data records, and experience manipulating and analyzing large volumes of data
- Relevant work experience of a minimum of 4 to 6 years in a Big 4 or technology/consulting setup

Ideally, you'll also have
- Ability to think strategically/end-to-end with a result-oriented mindset
- Ability to build rapport within the firm and win the trust of clients
- Willingness to travel extensively and to work on client sites/practice office locations

What We Look For
A team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment. An opportunity to be part of a market-prominent, multi-disciplinary team of 1400+ professionals, in the only integrated global transaction business worldwide. Opportunities to work with EY SaT practices globally with prominent businesses across a range of industries.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines.
In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success, as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
Posted 1 day ago
0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
As a Data Engineer at IBM, you'll play a vital role in application development and design, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements
- Strive for continuous improvement by testing the built solution and working under an agile framework
- Discover and implement the latest technology trends to maximize and build creative solutions

Preferred Education
Master's Degree

Required Technical And Professional Expertise
- Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing
- Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools
- Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts
- Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation
- Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy
- SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation
- Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including cloud storage systems

Preferred Technical And Professional Experience
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring
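The PySpark requirement above centers on distributed transformations such as groupBy/aggregate. As a hedged, dependency-free illustration (not from the posting), here is a pure-Python stand-in for the kind of aggregation `df.groupBy(key).sum(value)` performs in Spark; the event data is invented:

```python
from collections import defaultdict

def group_sum(records, key, value):
    """Pure-Python analogue of PySpark's df.groupBy(key).sum(value)."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec[key]] += rec[value]   # accumulate per group
    return dict(totals)

events = [
    {"region": "EU", "bytes": 100},
    {"region": "US", "bytes": 50},
    {"region": "EU", "bytes": 25},
]
totals = group_sum(events, "region", "bytes")
```

In Spark the same shuffle-and-reduce happens across partitions on many executors; the logic per group is identical.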
- Partner with other technology teams, including application development, enterprise architecture, testing services, and network engineering
- Good to have: experience with detection and prevention tools for company products, platforms, and customer-facing services
Posted 1 day ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
We are 3PILLAR GLOBAL
We build breakthrough software products that power digital businesses. We are an innovative product development partner whose solutions drive rapid revenue, market share, and customer growth for industry leaders in Software and SaaS, Media and Publishing, Information Services, and Retail. Our key differentiator is our Product Mindset. Our development teams focus on building for outcomes, and all of our team members around the globe are trained on the Product Mindset's core values – Minimize Time to Value, Solve For Need, and Excel at Change. Our teams apply this mindset to build digital products that are customer-facing and revenue-generating. Our business-minded approach to agile development ensures that we align to client goals from the earliest conceptual stages through market launch and beyond. In 2024, 3Pillar Global India was named a "Great Place to Work" for the seventh year in a row based on how our employees feel about our company, collaborative culture, and work/life balance – come join our growing team.

Key Responsibilities
- Provide L2 & L3 support for tickets reported from production environments
- Monitor, analyze, and troubleshoot ETL/data pipelines across data lakes and distributed systems
- Conduct in-depth root cause analysis using SQL queries, system logs, and monitoring tools
- Support microservices-based applications running in Docker and Kubernetes environments
- Diagnose and resolve Linux server issues related to disk usage, memory, networking, and permissions
- Collaborate with DevOps and CloudOps teams on system scaling, performance optimization, and configuration changes
- Maintain and automate system health using cron jobs, shell scripts, and cloud-native tools
- Drive end-to-end incident resolution, create detailed RCA reports, and implement preventive measures
- Work with cross-functional teams to identify long-term solutions and enhance system stability
- Ensure SLA compliance, maintain accurate documentation, and continuously improve support processes

Minimum Qualifications (Must Have)
- Minimum 5 years of experience in technical/application/production support in a fast-paced environment
- Strong hands-on experience with SQL, Linux, and cloud platforms (preferably AWS)
- Familiarity with monitoring and log management tools such as Datadog, Sumo Logic, Zabbix, or similar platforms
- Practical experience with Docker and Kubernetes
- Good understanding of microservices architecture, APIs, and log debugging
- Strong analytical and problem-solving skills with keen attention to detail
- Excellent communication and collaboration skills to work across technical and non-technical teams

Benefits
- A competitive annual salary based on experience and market demands
- Flexi-timings and Work From Anywhere
- Medical insurance with the option to purchase a premium plan or HSA option for your entire family
- Regular health check-up camps arranged by the company
- Recreational activities (pool, TT, Wii, PS2)
- Business casual atmosphere
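The support role above leans on root cause analysis from system logs. As an illustrative sketch (the log format is an assumption, not from the posting), a first triage pass often just counts ERROR lines per service to localize an incident:

```python
import re
from collections import Counter

# Assumed log shape: "<timestamp> <LEVEL> <service>: <message>"
LOG_LINE = re.compile(r"^(?P<ts>\S+) (?P<level>[A-Z]+) (?P<service>[\w-]+): (?P<msg>.*)$")

def error_counts(lines):
    """Count ERROR lines per service -- a quick first pass when triaging."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("service")] += 1
    return counts

sample = [
    "2024-05-01T10:00:00Z ERROR orders-api: connection refused",
    "2024-05-01T10:00:01Z INFO orders-api: retrying",
    "2024-05-01T10:00:02Z ERROR payments: timeout",
    "2024-05-01T10:00:03Z ERROR orders-api: connection refused",
]
counts = error_counts(sample)
```

Tools like Datadog or Sumo Logic do this at scale with queries; the parsing idea is the same.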
Posted 1 day ago
4.0 - 6.0 years
10 - 12 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
- Experience with backend technologies such as Express.js, Kubernetes, API Gateway, DynamoDB, and other AWS services
- 4-5 years of experience in back-end development; experience with front-end development is a plus
Posted 1 day ago
7.0 - 9.0 years
13 - 23 Lacs
Hyderabad
Hybrid
Role & responsibilities
Specialization (must have): Java (8), Spring Boot (8), Microservices (5)

Key Responsibilities:
- Strong in Java, with some experience of UI technologies and Angular/React JS frameworks
- Developed software using Java, Spring technologies (Spring, Spring MVC, Spring Boot), Hibernate, and web services (RESTful/SOAP); hands-on experience in microservices
- Basic web development skills in JavaScript, jQuery, CSS, HTML, and Bootstrap
- Working experience with ORM tools like JPA/Hibernate/Spring Data/MyBatis
- Strong RDBMS and SQL/PLSQL programming skills
- Experience in writing unit tests (JUnit/Mockito/EasyMock), etc.
- Understanding of software design patterns, Java best practices, and coding standards
- Relevant experience working with cross-cultural teams across multiple locations
- Hands-on experience with the AWS cloud platform
- Continuous integration experience (Jenkins/JIRA/Maven/Git)
- Working experience with servers like Tomcat, JBoss, WebSphere, WebLogic, etc.
- Knowledge of messaging and XML; design, implementation, and fine-tuning of large database applications, preferably using Oracle/MySQL
- Knowledge of inspection tools like SonarQube/SonarLint

Benefits:
- Complimentary meals provided
- Health insurance coverage
- Rewards and recognition programs
- Wellness sessions for mental and physical well-being
- Work with a certified Great Place to Work and CMMI Level company
Posted 1 day ago
7.0 - 12.0 years
18 - 22 Lacs
Noida, Gurugram, Delhi / NCR
Work from Office
Data Scientist/Engineer skills: AI/ML, data collection, architecture creation, Python, R, data analysis, Pandas, NumPy, Matplotlib, Git, TensorFlow, PyTorch, scikit-learn, Keras, cloud platforms (AWS/Azure/GCP), Docker, Kubernetes, Big Data, Hadoop, Spark
Posted 1 day ago
8.0 - 13.0 years
10 - 14 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Job Title: Technical Pre-Sales Consultant
Location: Chennai, Hyderabad, Bangalore
Experience: 7-10 Years

Job Summary: We are seeking a dynamic and detail-oriented Pre-Sales Consultant with 8-10 years of experience in a technology-driven environment. The ideal candidate will play a crucial role in understanding customer requirements and designing tailored solutions that effectively demonstrate the value of our offerings. This position bridges the gap between the technical and business aspects of the sales cycle, contributing to both client satisfaction and business growth.

Mandatory Skills:
- Strong communication and presentation skills
- Proven solution design and architecture capabilities
- Ability to translate customer requirements into technical solutions
- Familiarity with data engineering and data science concepts
- Experience with cloud platforms (AWS, Azure) and data platforms (Snowflake, Databricks, Cloudera)

Roles and Responsibilities:
- Engage with prospective clients to understand their business and technical requirements
- Design and present tailored solutions that address customer needs
- Collaborate with the sales team to prepare technical proposals, RFPs, and RFIs
- Deliver product demonstrations and Proofs of Concept (PoCs)
- Work closely with Product and Engineering teams to align customer needs with product capabilities
- Conduct competitive analysis and positioning
- Provide post-sales transition support to ensure a smooth handover
- Maintain and update technical documentation

Qualifications:
- Bachelor's degree in Engineering, Computer Science, or a related field
- Relevant certifications in cloud platforms or sales engineering preferred

Technical Skills:
- Cloud platforms: proficient in at least one major cloud provider (AWS, Azure, GCP)
- Data platforms: solid understanding of modern data architectures, data lakes, warehouses, and ETL/ELT frameworks
- Analytics solutions: familiarity with BI platforms (Power BI, Tableau) and basic AI/ML concepts
Service Provider Background: Prior experience in an IT services, consulting, or system integrator organization is preferred.

Consultative Skills:
- Excellent client-facing communication and presentation skills
- Ability to articulate complex technical concepts in business-friendly language

Domain Knowledge (Good to Have): Exposure to Banking and Financial Services (FSI) industry projects and regulatory landscapes.

Certifications (Preferred but not mandatory):
- Cloud certifications (e.g., AWS Solutions Architect, Azure Solutions Architect, Google Cloud Professional Architect)
- Data or analytics certifications

Soft Skills:
- Strong interpersonal and client-handling abilities
- Analytical and problem-solving skills
- Team collaboration and time management
- Ability to thrive under pressure
- Experience in SaaS or cybersecurity domains
- Knowledge of Agile methodologies
- Exposure to DevOps tools and practices
- Minimum 8 years of experience in a technical pre-sales or solutions engineering role
- Experience in client-facing roles with strong technical articulation

Benefits:
- Performance-based incentives and bonuses
- Health insurance for employees and dependents
- Annual learning & development allowance
- Work-from-home flexibility (based on project requirements)
- Travel reimbursement for client visits

KRA (Key Result Areas):
- Timely and effective delivery of proposals and solutions
- Client satisfaction with solution design and support
- Accuracy in requirement gathering and solution mapping
- Contribution to revenue growth through technical excellence

KPI (Key Performance Indicators):
- Number of qualified PoCs delivered
- Conversion rate of demos to deals
- Customer feedback and engagement ratings
Posted 1 day ago
10.0 - 20.0 years
40 - 45 Lacs
Bengaluru
Remote
Role & responsibilities
We are looking for a Senior Gen AI Engineer for a permanent, fully remote position with an MNC company.

Preferred candidate profile
Senior Gen AI Engineer – GenAI/ML (Python, LangChain), Full Time

Focus: Hands-on engineering role focused on designing, building, and deploying Generative AI and LLM-based solutions. The role requires deep technical proficiency in Python and modern LLM frameworks, with the ability to contribute to roadmap development and cross-functional collaboration.

Key Responsibilities:
- Design and develop GenAI/LLM-based systems using tools such as LangChain and Retrieval-Augmented Generation (RAG) pipelines
- Implement prompt engineering techniques and agent-based frameworks to deliver intelligent, context-aware solutions
- Collaborate with the engineering team to shape and drive the technical roadmap for LLM initiatives
- Translate business needs into scalable, production-ready AI solutions
- Work closely with business SMEs and data teams to ensure alignment of AI models with real-world use cases
- Contribute to architecture discussions, code reviews, and performance optimization

Skills Required:
- Proficient in Python, LangChain, and SQL
- Understanding of LLM internals, including prompt tuning, embeddings, vector databases, and agent workflows
- Background in machine learning or software engineering with a focus on system-level thinking
- Experience working with cloud platforms like AWS, Azure, or GCP
- Ability to work independently while collaborating effectively across teams
- Excellent communication and stakeholder management skills

Preferred Qualifications:
- Hands-on experience with LLMs and Generative AI techniques
- Experience contributing to ML/AI product pipelines or end-to-end deployments
- Familiarity with MLOps and scalable deployment patterns for AI models
- Prior exposure to client-facing projects or cross-functional AI teams
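The RAG pipelines mentioned above hinge on retrieving the document chunks whose embeddings are most similar to the query embedding. As a hedged toy sketch (the chunk ids and three-dimensional "embeddings" are invented; real systems use a model and a vector database), the core retrieval step is cosine-similarity top-k:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, index, k=2):
    """Return the ids of the k chunks most similar to the query embedding."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item["vec"]),
                    reverse=True)
    return [item["id"] for item in ranked[:k]]

# Toy index: three chunks with made-up low-dimensional embeddings.
index = [
    {"id": "refunds-policy", "vec": [0.9, 0.1, 0.0]},
    {"id": "shipping-times", "vec": [0.1, 0.9, 0.0]},
    {"id": "returns-howto",  "vec": [0.8, 0.2, 0.1]},
]
top = retrieve([1.0, 0.0, 0.0], index, k=2)
```

The retrieved chunks are then stuffed into the LLM prompt as context; frameworks like LangChain wrap exactly this loop behind their retriever abstractions.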
Posted 1 day ago
1.0 - 5.0 years
2 - 5 Lacs
Bengaluru
Work from Office
Required Skills:
- Amazon Connect: hands-on experience designing and developing contact flows; strong understanding of Amazon Connect architecture and integration capabilities
- AWS expertise: proficient with Lambda, EC2, CloudWatch, and other AWS services; ability to integrate AWS services effectively into contact center solutions
- Python development: strong Python coding skills with experience in developing backend services or automation scripts; familiarity with AWS SDKs (e.g., boto3) is a plus
- Communication skills: excellent verbal and written communication; ability to work closely with clients and cross-functional teams
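Amazon Connect contact flows commonly invoke a Lambda function and read its response as attributes. As a hedged sketch (the `CRM` dict, phone number, and tier values are hypothetical stand-ins for a real data source), a minimal handler might look like this:

```python
CRM = {"+15555550100": "gold"}  # hypothetical stand-in for a real CRM lookup

def lambda_handler(event, context):
    """Look up a caller tier from the number Amazon Connect passes in.

    Connect invokes Lambda with contact data under event["Details"]; the
    response should be a flat key/value map that the contact flow can read.
    """
    number = event["Details"]["ContactData"]["CustomerEndpoint"]["Address"]
    tier = CRM.get(number, "standard")
    return {"customerTier": tier}

# Simulated Connect invocation for local testing.
resp = lambda_handler(
    {"Details": {"ContactData": {"CustomerEndpoint": {"Address": "+15555550100"}},
                 "Parameters": {}}},
    None,
)
```

In the contact flow, the returned value would then be referenced as an external attribute to branch routing (e.g., priority queue for "gold" callers).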
Posted 1 day ago
5.0 - 10.0 years
6 - 10 Lacs
Bengaluru
Work from Office
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we're a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world – and it drives us beyond generational gaps and disruptions of the future.

We are looking forward to hiring Python professionals in the following areas:

Job Description: This position will be Right to Hire!
Experience: 5+ years
Degree in computer science, engineering, or similar fields
Skill Set: AWS, Python, PySpark

Primary Responsibilities
- Design, develop, test, and support data pipelines and applications
- Industrialize data feeds
- Experience working with AWS cloud environments
- Create data pipelines into existing systems
- Enforce security controls and best practices to protect sensitive data within AWS data pipelines, including encryption, access controls, and auditing mechanisms
- Improve data cleansing and facilitate connectivity of data and applied technologies between both external and internal data sources
- Establish a continuous quality improvement process to systematically optimize data quality
- Translate data requirements from data users into ingestion activities

Qualifications
- B.Tech/B.Sc./M.Sc. in Computer Science or a related field and 3+ years of relevant industry experience
- Interest in solving challenging technical problems
- Nice to have: test-driven development and CI/CD workflows
- Knowledge of version control software such as Git and experience working with major hosting services (e.g., Azure DevOps, GitHub, Bitbucket, GitLab)
- Nice to have: experience working with cloud environments such as AWS, especially creating serverless architectures and using Infrastructure as Code facilities such as CloudFormation/CDK, Terraform, or ARM templates
- Hands-on experience with various frontend and backend languages (e.g., Python, R, Java, Scala, C/C++, Rust, TypeScript, ...)

At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided with technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles:
- Flexible work arrangements, free spirit, and emotional positivity
- Agile self-determination, trust, transparency, and open collaboration
- All support needed for the realization of business goals
- Stable employment with a great atmosphere and ethical corporate culture
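The posting above asks for experience protecting sensitive data inside pipelines. As a hedged illustration (field names and the salt are invented; production systems would use a managed key, e.g. from AWS KMS or Secrets Manager), one common control is pseudonymising sensitive columns with a salted hash before data leaves the pipeline:

```python
import hashlib

SENSITIVE = {"email", "ssn"}  # assumed sensitive column names

def pseudonymise(record, salt="demo-salt"):
    """Replace sensitive fields with a salted-hash token, leaving the rest intact."""
    out = {}
    for key, value in record.items():
        if key in SENSITIVE and value is not None:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:12]   # short, stable token still usable for joins
        else:
            out[key] = value
    return out

row = pseudonymise({"id": 7, "email": "a@example.com", "amount": 9.5})
```

Because the token is deterministic for a given salt, downstream joins on the pseudonymised column still work without exposing the raw value.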
Posted 1 day ago
5.0 - 10.0 years
5 - 9 Lacs
Bengaluru
Work from Office
OMS L3 Support Engineer
Skills: Hands-on experience in L3 support for OMS, with coding experience in Java, Spring, and Microservices.
Experience: 5-10 years
- 5+ years of experience with OMS and the eCommerce and/or OMS domain
- Good end-to-end knowledge of the various commerce subsystems, including storefront, core commerce back end, post-purchase processing, OMS, store/warehouse management processes, supply chain, and logistics processes
- Must have working knowledge of production application support
- Should work closely with counterparts in L1/L2 teams to monitor, analyze, and expedite issue resolution, reduce recurring issues, and automate SOPs, or proactively find avenues for the same
- Extensive backend development knowledge with core Java/J2EE and microservice-based, event-driven architecture on a cloud-based platform (preferably AWS)
- Experience in service-oriented architecture: developing, securely exposing, and consuming RESTful web services, and integrating headless applications
- Should be able to understand the system end-to-end, maintain the application, and troubleshoot issues
- Should understand building, deploying, and maintaining server-based as well as serverless applications on the cloud, preferably AWS
- Expertise in integrating synchronously and asynchronously with third-party web services
- Good to have concrete knowledge of AWS Lambda functions, API Gateway, CloudWatch, SQS, SNS, EventBridge, Kinesis, Secrets Manager, S3 storage, server architectural models, etc.
- Good knowledge of observability tools like New Relic, Datadog, Grafana, Splunk, etc., including configuring new reports, proactive alert settings, monitoring KPIs, etc.
Posted 1 day ago
3.0 - 5.0 years
12 - 14 Lacs
Chennai
Work from Office
CBTS serves enterprise and midmarket clients in all industries across the United States and Canada. CBTS combines deep technical expertise with a full suite of flexible technology solutions, including Application Modernization, Managed Hybrid Cloud, Cybersecurity, Unified Communications, and Infrastructure solutions. From developing and deploying modern applications and the secure, scalable platforms on which they run, to managing, monitoring, and optimizing their operations, CBTS delivers comprehensive technology solutions for its clients' transformative business initiatives. For more information, please visit www.cbts.com.

OnX is a leading technology solution provider that serves businesses, healthcare organizations, and government agencies across Canada. OnX combines deep technical expertise with a full suite of flexible technology solutions, including Generative AI, Application Modernization, Managed Hybrid Cloud, Cybersecurity, Unified Communications, and Infrastructure solutions. From developing and deploying modern applications and the secure, scalable platforms on which they run, to managing, monitoring, and optimizing their operations, OnX delivers comprehensive technology solutions for its clients' transformative business initiatives. For more information, please visit www.onx.com.

Cloud Engineer II
Job Purpose: Design, develop, and maintain scalable data pipelines using AWS data engineering tools (e.g., AWS Glue, AWS Lambda, Amazon Redshift). Implement ETL processes to extract, transform, and load data from various sources into data warehouses and data lakes.

Essential Functions:
- Develop and manage event streaming solutions using tools such as Apache Kafka, AWS Kinesis, or similar technologies
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet business needs
- Optimize and troubleshoot data pipelines to ensure high performance and reliability
- Monitor and maintain data quality and integrity across all data systems
- Stay up to date with the latest trends and best practices in data engineering and AWS technologies
- Work within Agile Scrum and XP methodologies, participating in sprint planning, daily stand-ups, and retrospectives to ensure timely delivery of projects
- Apply Test-Driven Development (TDD) practices to ensure high-quality code and robust data solutions

Education: Bachelor's degree in Computer Science, Information Technology, or a related field

Certifications, Accreditations, Licenses: Certification in AWS data engineering or related fields

Experience: 6+ years of related experience

Special Knowledge, Skills And Abilities
- 3-5 years of relevant experience in data engineering, with a focus on AWS data engineering tools
- Proficiency in ETL processes and tools (e.g., AWS Glue, PySpark, Apache Airflow)
- Experience with event streaming technologies (e.g., Apache Kafka, AWS Kinesis)
- Strong programming skills in languages such as Python, Java, or Scala
- Familiarity with SQL and database management systems (e.g., PostgreSQL, MySQL)
- Excellent problem-solving skills and attention to detail
- Ability to work collaboratively in a team environment
- Strong communication skills and the ability to convey complex technical concepts to non-technical stakeholders
- Experience working in Agile Scrum and XP methodologies
- Experience with Test-Driven Development (TDD)
- Experience with data warehousing solutions (e.g., Amazon Redshift, Snowflake)
- Knowledge of data governance and security best practices

Nice to Have Skills: Experience with MS Azure and MS Fabric
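The event-streaming work above (Kafka/Kinesis) typically reduces to windowed aggregations over timestamped events. As a hedged, dependency-free sketch (the events and 60-second window are invented), here is the tumbling-window bucketing at the heart of such consumers:

```python
from collections import defaultdict

def window_counts(events, window_seconds=60):
    """Bucket (epoch_ts, payload) events into fixed tumbling windows."""
    buckets = defaultdict(int)
    for ts, _payload in events:
        window_start = ts - (ts % window_seconds)  # floor to the window boundary
        buckets[window_start] += 1
    return dict(buckets)

# Events at t=0s and t=30s share the first window; t=65s falls in the next.
counts = window_counts([(0, "a"), (30, "b"), (65, "c")], window_seconds=60)
```

A real Kinesis or Kafka consumer would apply the same floor-to-boundary logic per shard/partition, flushing each bucket downstream once its window closes.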
Posted 1 day ago
4.0 - 7.0 years
6 - 10 Lacs
Hyderabad
Work from Office
The Role:
Experian ECS is building a New Growth domain to help us meet a wider range of consumers' financial needs throughout their financial lives. To reach our strategic ambition we must expand our offerings to the areas most aligned with what our consumers want. As an Engineer in the New Growth team, you will be responsible for developing the features and core services that power the applications and solutions our customers rely on. Working closely with other developers, QA engineers, architects, and product owners, you will grow to understand the domain before bringing your own ideas to solve real business problems.

Responsibilities:
- As a member of our agile team, you'll have a passion for building and shipping high-performance, robust, and efficient AWS-based services that you can be proud of
- You'll be responsible for feature delivery for our New Growth initiative
- Design, develop, and maintain robust applications using .NET and React
- Utilize strong analytical skills to solve complex technical problems
- Collaborate with cross-functional teams to deliver high-quality software solutions
- Develop and maintain full-stack applications, ensuring seamless integration and functionality
- Implement unit testing and acceptance test automation to ensure software reliability
- Work with the existing CI/CD pipeline and support the team with this process
- Stay updated with modern technologies and best practices to continuously improve development processes
- Mentor junior engineers and provide technical leadership
- Lead architectural design and decision-making processes to ensure scalable and efficient solutions
- Define and enforce best practices for software development and architecture
- Evaluate and integrate new technologies to enhance system capabilities and performance

About Experian
Experian Consumer Services (ECS) is looking for Senior Full Stack Engineers in Hyderabad, India to work alongside our UK colleagues to deliver business outcomes for the UK&I region.
Background: This is an incredibly exciting time for the Experian UKI Region, as we look to build out our presence in Hyderabad and embark on a technology transformation programme to meet our global aspiration to significantly scale our business over the next five years. This is an opportunity to join us on this journey and be part of a collaborative team that uses Agile principles to deliver business value. Our unique culture and agile ways of working offer a great opportunity to those seeking to join a talented set of diverse problem solvers to design, build and maintain our products. We pride ourselves on excellence, adopting best practices and holding ourselves to the highest standards.
Experience and Skills:
4-7 years of experience in software development, with extensive expertise in .Net and React
Good knowledge of microservice architecture delivered on .NET Core, Node.js and React, hosted using AWS technologies such as CloudFront, S3, Fargate, EC2, Lambda, SNS, SQS and DynamoDB
Experience developing outstanding Flutter applications for iOS and Android
Good with feedback, continually looking to improve and develop
Strong knowledge of algorithms, data structures, and software analytics
Excellent communication skills and the ability to work in a fast-paced environment
AWS certification is preferred
Familiarity with full-stack development and unit testing
Experience with acceptance test automation
Quick learner with the ability to adapt to new technologies
We expect you to have solid experience in software engineering, with a proven track record of building mission-critical, high-volume transactional web-based software systems.
Additional Information: Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters: DEI, work/life balance, development, authenticity, collaboration, wellness, reward and recognition, volunteering... the list goes on.
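The skills listed above call out event-driven AWS services such as SQS and Lambda. As a minimal, hedged sketch of that pattern (the message payload shape and the `customerId` field are illustrative assumptions, not taken from this posting), a Python Lambda handler consuming SQS messages might look like:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler for an SQS event source.

    Lambda delivers SQS messages under event["Records"], where each
    record's "body" is the raw message string.
    """
    results = []
    for record in event.get("Records", []):
        payload = json.loads(record["body"])  # assumes JSON message bodies
        # Illustrative business step: mark the (hypothetical) customer id as processed.
        results.append({"customerId": payload.get("customerId"), "status": "processed"})
    return {"processed": len(results), "results": results}

# Local smoke test with a hand-built SQS-style event (no AWS account needed).
sample_event = {"Records": [{"body": json.dumps({"customerId": "c-123"})}]}
print(handler(sample_event, None))
```

Practising with locally invokable handlers like this is a common way to prepare for interviews on serverless designs, since the function can be unit tested without deploying anything.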
Experian's people-first approach is award-winning: World's Best Workplaces 2024 (Fortune Top 25), Great Place To Work in 24 countries, and Glassdoor Best Places to Work 2024, to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is an important part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity. Experian Careers - Creating a better tomorrow together. Find out what it's like to work for Experian by clicking here.
Posted 1 day ago
With the increasing demand for cloud services and infrastructure, the job market for AWS professionals in India is booming. Companies of all sizes are looking to leverage AWS services for their businesses, leading to a high demand for skilled professionals in this domain.
The average salary range for AWS professionals in India varies based on experience and expertise. Entry-level positions can expect to earn around ₹6-8 lakhs per annum, while experienced professionals can earn upwards of ₹15 lakhs per annum.
In the AWS job market in India, a typical career path may include roles such as Junior Developer, Developer, Senior Developer, Tech Lead, and Architect. With experience and certifications, professionals can progress to higher roles with more responsibilities and higher pay scales.
In addition to AWS expertise, professionals in this field are often expected to have skills in areas such as:
- DevOps
- Linux/Unix systems administration
- Scripting languages (Python, Shell)
- Networking concepts
- Security best practices
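One small, self-contained way to practise the scripting skills listed above is parsing AWS resource identifiers. The sketch below assumes AWS's documented ARN layout (`arn:partition:service:region:account-id:resource`); the helper name and example ARN are ours, for illustration only:

```python
def parse_arn(arn: str) -> dict:
    """Split an AWS ARN into its documented components.

    ARNs follow the layout arn:partition:service:region:account-id:resource;
    the resource portion may itself contain ':', so the split is capped at 5.
    """
    parts = arn.split(":", 5)
    if len(parts) != 6 or parts[0] != "arn":
        raise ValueError(f"not a valid ARN: {arn!r}")
    keys = ("prefix", "partition", "service", "region", "account", "resource")
    return dict(zip(keys, parts))

# Example: a hypothetical SQS queue ARN.
print(parse_arn("arn:aws:sqs:ap-south-1:123456789012:orders-queue"))
```

Exercises like this are quick to verify locally and touch both the scripting and the AWS-fundamentals knowledge interviewers tend to probe.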
As you explore AWS job opportunities in India, remember to showcase your skills and experience confidently during interviews. Prepare thoroughly, stay updated with the latest technologies, and demonstrate your passion for cloud computing to land your dream job in the AWS domain. Good luck! 🚀