0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary
- Develop and maintain CI/CD pipelines using GitHub Actions to streamline the software development lifecycle.
- Design, deploy, and manage AWS infrastructure, ensuring high availability and security.
- Implement and manage Helm charts for Kubernetes to automate application deployments.
- Use YAML configuration files to define and manage infrastructure and application settings.
- Apply SRE principles to enhance system reliability, performance, and capacity through automation and monitoring.
- Collaborate with development teams to integrate reliability and scalability into the software development process.
- Monitor application and infrastructure performance, troubleshoot issues, and implement solutions to improve system reliability.
- Implement infrastructure as code (IaC) using tools like Terraform for efficient resource management.
Required Skills and Qualifications
- Proven experience in Site Reliability Engineering (SRE) practices.
- Strong expertise in GitHub Actions and Terraform for CI/CD pipeline development.
- Strong knowledge of YAML, its structure, and parameterization for configuration management.
- Working experience with AWS services, including EC2, S3, Lambda, RDS, and VPC.
- Deep understanding of authentication, security, scalability, and parallelization of GitHub Actions workflows and jobs across the CI/CD process.
- Working experience with Helm charts for Kubernetes deployment and management.
- Proficiency in scripting and automation using languages such as Python or PowerShell.
- Understanding of containerization technologies like Docker and orchestration with Kubernetes.
- Excellent problem-solving skills and the ability to work collaboratively in a fast-paced environment.
- Strong communication and collaboration skills.
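To illustrate the YAML parameterization this posting calls for, here is a minimal Python sketch that merges per-environment overrides into a base deployment config before it is rendered; it assumes the PyYAML package, and the config keys, environment names, and image tags are hypothetical.

```python
# Minimal sketch: parameterizing a YAML deployment config per environment.
# Assumes PyYAML is installed; keys, environments, and values are hypothetical.
import yaml

BASE = """
app: payments-api
replicas: 2
image:
  repository: registry.example.com/payments-api
  tag: latest
"""

OVERRIDES = {
    "dev":  {"replicas": 1, "image": {"tag": "dev-latest"}},
    "prod": {"replicas": 4, "image": {"tag": "v1.4.2"}},
}

def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override values into a copy of the base mapping."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

def render(env: str) -> str:
    """Return the environment-specific config as YAML text."""
    config = deep_merge(yaml.safe_load(BASE), OVERRIDES[env])
    return yaml.safe_dump(config, sort_keys=False)

if __name__ == "__main__":
    print(render("prod"))
```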
Posted 2 days ago
10.0 - 14.0 years
0 Lacs
Pune, Maharashtra
On-site
We are seeking a highly experienced C++ Dev Lead to create scalable, dynamic, highly interactive, and user-friendly software solutions. Your role will involve contributing to the development of our innovative clinical development product suite, which assists customers in designing, modeling, and simulating complex clinical trials. This suite aims to enhance the success rate of trials and expedite time to market, providing significant value to sponsors and patients. A crucial element of our software products is the proprietary engine components that implement advanced statistical and mathematical algorithms, such as simulations of complex adaptive trials, for both cloud-hosted and on-premise solutions. As an Engine Dev Lead, your responsibilities will include developing engines in languages like C++, R, or Python, managing a cross-functional team throughout the software development life cycle, and leading junior developers. You will collaborate with various teams to ensure the successful implementation of statistical and mathematical algorithms, design backend computational modules, maintain code quality, and lead a Scrum project team to deliver projects efficiently. Key Responsibilities: - Implement statistical/mathematical algorithms in C++ for on-premises or cloud-hosted applications - Collaborate with developers, architects, UX designers, and product managers to ideate software solutions - Design backend computational modules, maintain design artifacts, and ensure testability - Maintain code quality and high performance through regular code reviews and refactoring - Conduct automated unit testing, follow coding guidelines, and participate in design discussions - Lead a Scrum project team, guide junior developers, and ensure adherence to SDLC processes - Monitor and manage project risks, make technical presentations, and stay updated on industry trends Qualifications: - Minimum 10 years of C++ programming experience with strong knowledge of OOAD principles - Proficiency in Applied Mathematics, algorithmic computing, and strong analytical skills - Experience in developing high-performance applications and familiarity with Agile frameworks - Strong communication skills, self-driven with problem-solving abilities, and a degree in Computer Science or related field Optional Skills: - Familiarity with Jira, Confluence, Python, R, C#.NET, and cloud platforms - Experience in scientific graphics, microservices, REST APIs, and databases - Knowledge of statistical/scientific software and versioning tools If you meet the qualifications and are excited to lead a team in developing cutting-edge software solutions, we encourage you to apply.
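As a toy illustration of the trial-simulation work this role centers on (the production engines described above are written in C++), the following Python/NumPy sketch estimates the power of a simple two-arm, fixed-design trial by Monte Carlo; the effect size, variance, and sample size are hypothetical, and real adaptive designs add interim analyses and stopping rules on top of this.

```python
# Minimal sketch: Monte Carlo power estimate for a two-arm, fixed-design trial.
# Assumes NumPy; all design parameters are hypothetical.
import numpy as np

def estimate_power(n_per_arm=100, delta=0.4, sigma=1.0, n_sims=20_000, seed=7):
    rng = np.random.default_rng(seed)
    control = rng.normal(0.0, sigma, size=(n_sims, n_per_arm))
    treated = rng.normal(delta, sigma, size=(n_sims, n_per_arm))
    # Two-sample z statistic per simulated trial (variance treated as known).
    se = sigma * np.sqrt(2.0 / n_per_arm)
    z = (treated.mean(axis=1) - control.mean(axis=1)) / se
    # Two-sided test at alpha = 0.05, so the critical value is 1.96.
    return float(np.mean(np.abs(z) > 1.959964))

if __name__ == "__main__":
    print(f"Estimated power: {estimate_power():.3f}")
```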
Posted 2 days ago
3.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) – SSIS – Senior. We're looking for Informatica or SSIS Engineers with a cloud background (AWS, Azure). Primary skills: Has played key roles in multiple large global transformation programs on business process management. Experience in database querying using SQL. Experience building and integrating data into a data warehouse. Experience in data profiling and reconciliation. Informatica PowerCenter/IBM DataStage/SSIS development. Strong proficiency in SQL/PLSQL. Good experience in performance tuning ETL workflows and suggesting improvements. Developed expertise in complex data management or application integration solutions and their deployment in areas of data migration, data integration, application integration or data quality. Experience in data processing, orchestration, parallelization, transformations and ETL fundamentals. Leverages a variety of programming languages and data crawling/processing tools to ensure data reliability, quality and efficiency (optional). Experience with cloud data tools (Microsoft Azure, Amazon S3 or data lakes). Knowledge of cloud infrastructure; knowledge of Talend Cloud is an added advantage. Knowledge of data modelling principles. Knowledge of Autosys scheduling. Good experience in database technologies. Good knowledge of Unix systems. Responsibilities: Work as a team member contributing to the various technical streams of data integration projects. Provide product- and design-level technical best practices. Interface and communicate with the onsite coordinators. Complete assigned tasks on time and report status regularly to the lead. Build a quality culture. Use an issue-based approach to deliver growth, market and portfolio strategy engagements for corporates. Strong communication, presentation and team-building skills, and experience in producing high-quality reports, papers and presentations. Experience in executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint. Qualification: BE/BTech/MCA (must) with industry experience of 3-7 years. Experience in Talend jobs, joblets and custom components. Knowledge of error handling and performance tuning in Talend. Experience in big data technologies such as Sqoop, Impala, Hive, YARN, Spark etc. Informatica PowerCenter/IBM DataStage/SSIS development. Strong proficiency in SQL/PLSQL. Good experience in performance tuning ETL workflows and suggesting improvements. Experience with at least 3-4 clients on short-duration projects of 6-8+ months, or with at least 2 clients on projects lasting 1-2 years or longer. People with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment. EY | Building a better working world. EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
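For illustration of the data profiling and reconciliation work named in this posting, a minimal Python sketch using pandas follows; the frames, key column, and values are hypothetical stand-ins for data pulled from the source system and the target warehouse.

```python
# Minimal sketch: post-load reconciliation between a source extract and a
# warehouse target. Assumes pandas; table and column names are hypothetical.
import pandas as pd

def reconcile(source: pd.DataFrame, target: pd.DataFrame, key: str) -> dict:
    """Compare row counts, null profiles, and key coverage between two loads."""
    missing_keys = set(source[key]) - set(target[key])
    return {
        "source_rows": len(source),
        "target_rows": len(target),
        "row_count_match": len(source) == len(target),
        "source_nulls": source.isna().sum().to_dict(),
        "target_nulls": target.isna().sum().to_dict(),
        "keys_missing_in_target": sorted(missing_keys),
    }

if __name__ == "__main__":
    src = pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, None, 7.5]})
    tgt = pd.DataFrame({"order_id": [1, 2], "amount": [10.0, None]})
    print(reconcile(src, tgt, key="order_id"))
```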
Posted 3 days ago
3.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) – SSIS – Senior. We're looking for Informatica or SSIS Engineers with a cloud background (AWS, Azure). Primary skills: Has played key roles in multiple large global transformation programs on business process management. Experience in database querying using SQL. Experience building and integrating data into a data warehouse. Experience in data profiling and reconciliation. Informatica PowerCenter/IBM DataStage/SSIS development. Strong proficiency in SQL/PLSQL. Good experience in performance tuning ETL workflows and suggesting improvements. Developed expertise in complex data management or application integration solutions and their deployment in areas of data migration, data integration, application integration or data quality. Experience in data processing, orchestration, parallelization, transformations and ETL fundamentals. Leverages a variety of programming languages and data crawling/processing tools to ensure data reliability, quality and efficiency (optional). Experience with cloud data tools (Microsoft Azure, Amazon S3 or data lakes). Knowledge of cloud infrastructure; knowledge of Talend Cloud is an added advantage. Knowledge of data modelling principles. Knowledge of Autosys scheduling. Good experience in database technologies. Good knowledge of Unix systems. Responsibilities: Work as a team member contributing to the various technical streams of data integration projects. Provide product- and design-level technical best practices. Interface and communicate with the onsite coordinators. Complete assigned tasks on time and report status regularly to the lead. Build a quality culture. Use an issue-based approach to deliver growth, market and portfolio strategy engagements for corporates. Strong communication, presentation and team-building skills, and experience in producing high-quality reports, papers and presentations. Experience in executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint. Qualification: BE/BTech/MCA (must) with industry experience of 3-7 years. Experience in Talend jobs, joblets and custom components. Knowledge of error handling and performance tuning in Talend. Experience in big data technologies such as Sqoop, Impala, Hive, YARN, Spark etc. Informatica PowerCenter/IBM DataStage/SSIS development. Strong proficiency in SQL/PLSQL. Good experience in performance tuning ETL workflows and suggesting improvements. Experience with at least 3-4 clients on short-duration projects of 6-8+ months, or with at least 2 clients on projects lasting 1-2 years or longer. People with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment. EY | Building a better working world. EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 days ago
3.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) – SSIS – Senior. We're looking for Informatica or SSIS Engineers with a cloud background (AWS, Azure). Primary skills: Has played key roles in multiple large global transformation programs on business process management. Experience in database querying using SQL. Experience building and integrating data into a data warehouse. Experience in data profiling and reconciliation. Informatica PowerCenter/IBM DataStage/SSIS development. Strong proficiency in SQL/PLSQL. Good experience in performance tuning ETL workflows and suggesting improvements. Developed expertise in complex data management or application integration solutions and their deployment in areas of data migration, data integration, application integration or data quality. Experience in data processing, orchestration, parallelization, transformations and ETL fundamentals. Leverages a variety of programming languages and data crawling/processing tools to ensure data reliability, quality and efficiency (optional). Experience with cloud data tools (Microsoft Azure, Amazon S3 or data lakes). Knowledge of cloud infrastructure; knowledge of Talend Cloud is an added advantage. Knowledge of data modelling principles. Knowledge of Autosys scheduling. Good experience in database technologies. Good knowledge of Unix systems. Responsibilities: Work as a team member contributing to the various technical streams of data integration projects. Provide product- and design-level technical best practices. Interface and communicate with the onsite coordinators. Complete assigned tasks on time and report status regularly to the lead. Build a quality culture. Use an issue-based approach to deliver growth, market and portfolio strategy engagements for corporates. Strong communication, presentation and team-building skills, and experience in producing high-quality reports, papers and presentations. Experience in executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint. Qualification: BE/BTech/MCA (must) with industry experience of 3-7 years. Experience in Talend jobs, joblets and custom components. Knowledge of error handling and performance tuning in Talend. Experience in big data technologies such as Sqoop, Impala, Hive, YARN, Spark etc. Informatica PowerCenter/IBM DataStage/SSIS development. Strong proficiency in SQL/PLSQL. Good experience in performance tuning ETL workflows and suggesting improvements. Experience with at least 3-4 clients on short-duration projects of 6-8+ months, or with at least 2 clients on projects lasting 1-2 years or longer. People with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment. EY | Building a better working world. EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 days ago
10.0 years
1 - 10 Lacs
Hyderābād
On-site
Storable is looking for a Software Architect who is passionate about designing and building high quality software and playing a crucial role in evolving the Architecture to deliver on our vision of a unified platform. How is this different from other software engineering jobs? As the leading provider of technology solutions in the self-storage industry, we are creating innovative experiences for our customers that help them run better businesses than ever before. You should be passionate about building the right solutions and be comfortable working in an open-ended, dynamic environment where roles are flexible and communication is essential. This is a great opportunity for you to join a team that encourages innovation, creativity, teamwork, professional growth, and advancement. What you'll be doing You will be responsible for outlining the architectural and technical direction of Storable's consumer-facing products As a member of the Architecture team, you will work with other Architects to ensure that technology solutions align with long-term product, security, and enterprise visions and roadmaps You will lead cross-product discussions and drive alignment on technology strategy You will drive innovation around engineering principles and practices, identifying improvements and ideas, and driving their adoption across the organization You will evaluate technical issues and initiatives, work with teams to recommend solutions that align with the Architectural direction, and guide teams during implementation You will collaborate with product and engineering team leads to translate business objectives into clear problem statements and system requirements You embrace ambiguity and work towards bringing clarity to the roadmap in collaboration with the Consumer Group Product and Engineering Managers You will define standard software engineering/architecture practices at an Organisational level You must be a strong team leader. You enjoy mentoring and collaboration across engineering and product teams What you need to bring to the table Strong hands-on coding experience with 10+ years in consumer web development technologies and architectures Track record of effectively aligning large organizations on long-term technical visions and for leading teams of senior engineers to successfully execute a multi-quarter technical vision Familiarity with architectural patterns of highly scalable enterprise service design, including monitoring, analytics, and reliability engineering Strong understanding of cloud technologies (AWS, Kubernetes) and passion for adopting DevOps practices (CI/CD, Infrastructure as Code) Extensive experience designing and developing React and NodeJS applications across multiple development teams Exposure to microservice & asynchronous event-based architectures Experience designing and maintaining globally distributed, large-scale systems Experience with server-side technologies and track record of identifying performance bottlenecks and outlining remedies (caching, asynchronous processing, parallelization techniques) Deep understanding of REST API design principles and practices Experience with cloud configuration and deployment technologies Experience with Agile software development practices
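The posting names caching and asynchronous processing among the remedies for performance bottlenecks; as a small, hedged illustration, the Python sketch below shows an in-process TTL cache decorator around an expensive read. The function, data, and timings are hypothetical, and a real deployment would more likely use a shared cache such as Redis.

```python
# Minimal sketch: a time-to-live (TTL) cache decorator for expensive reads.
# Purely illustrative; the cached function and values are hypothetical.
import time
from functools import wraps

def ttl_cache(ttl_seconds: float):
    def decorator(fn):
        store = {}  # maps args -> (expires_at, value)

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]          # still fresh: serve from cache
            value = fn(*args)          # stale or missing: recompute
            store[args] = (now + ttl_seconds, value)
            return value

        return wrapper
    return decorator

@ttl_cache(ttl_seconds=30)
def facility_summary(facility_id: int) -> dict:
    time.sleep(0.2)  # stand-in for a slow database or downstream API call
    return {"facility_id": facility_id, "occupancy": 0.87}

if __name__ == "__main__":
    facility_summary(42)  # slow call populates the cache
    facility_summary(42)  # served from the cache for the next 30 seconds
```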
Posted 1 week ago
3.0 years
0 Lacs
Hyderābād
On-site
Software Engineer II. Storable is looking for a Software Engineer II who is passionate about building high quality software and is excited to contribute to our Payments platform. How is this different from other software engineering jobs? As the leading provider of technology solutions in the self-storage industry, we are creating innovative experiences for our customers that help them run better businesses than ever before. You should be passionate about building the right solutions and be comfortable working in an open-ended, dynamic environment where roles are flexible and communication is essential. This is a great opportunity for you to join a team that encourages innovation, creativity, teamwork, professional growth, and advancement. What you'll be doing: Aid in the design and development of a payments platform. Adapt to new technologies and technical direction to further empower the team. Collaborate with your peers on the scrum team to create the best solutions possible. Mentor more junior members of the team. Maintain code coverage to ensure the quality of code delivery. Continuously look for process improvements that speed up delivery. Work in a highly productive product team with big goals and awesome potential. What you need to bring to the table: 3+ years of hands-on coding experience in Node.js backend API development using the NestJS or Express.js framework, and 3-5 years of overall industry experience. Good understanding of JavaScript fundamentals. 1+ years of experience in React.js is good to have. Willingness to adapt the tech stack based on market needs. Exposure to architectural patterns of highly scalable enterprise service design, including monitoring, analytics, and entity CRUD use cases. Experience with relational and/or non-relational databases. Understanding of RESTful web service architecture, design, and implementation. Ability to work under minimal supervision and/or direction. Experience with Agile software development. What would be nice to bring along: Exposure to the payments industry or payment processing. Experience using public cloud (AWS), Docker containers, Kubernetes, CI/CD. Exposure to microservices and asynchronous event-based architectures. Experience with server-side technologies including caching, asynchronous processing, and parallelization techniques. Experience with cloud configuration and deployment technologies.
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Software Engineer II. Storable is looking for a Software Engineer II who is passionate about building high quality software and is excited to contribute to our Payments platform. How is this different from other software engineering jobs? As the leading provider of technology solutions in the self-storage industry, we are creating innovative experiences for our customers that help them run better businesses than ever before. You should be passionate about building the right solutions and be comfortable working in an open-ended, dynamic environment where roles are flexible and communication is essential. This is a great opportunity for you to join a team that encourages innovation, creativity, teamwork, professional growth, and advancement. What you'll be doing: Aid in the design and development of a payments platform. Adapt to new technologies and technical direction to further empower the team. Collaborate with your peers on the scrum team to create the best solutions possible. Mentor more junior members of the team. Maintain code coverage to ensure the quality of code delivery. Continuously look for process improvements that speed up delivery. Work in a highly productive product team with big goals and awesome potential. What you need to bring to the table: 3+ years of hands-on coding experience in Node.js backend API development using the NestJS or Express.js framework, and 3-5 years of overall industry experience. Good understanding of JavaScript fundamentals. 1+ years of experience in React.js is good to have. Willingness to adapt the tech stack based on market needs. Exposure to architectural patterns of highly scalable enterprise service design, including monitoring, analytics, and entity CRUD use cases. Experience with relational and/or non-relational databases. Understanding of RESTful web service architecture, design, and implementation. Ability to work under minimal supervision and/or direction. Experience with Agile software development. What would be nice to bring along: Exposure to the payments industry or payment processing. Experience using public cloud (AWS), Docker containers, Kubernetes, CI/CD. Exposure to microservices and asynchronous event-based architectures. Experience with server-side technologies including caching, asynchronous processing, and parallelization techniques. Experience with cloud configuration and deployment technologies.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Bareilly, Uttar Pradesh
On-site
As a Software Engineer at our company based in Bareilly, UP, you will have the opportunity to unlock your full potential in a supportive work environment. We believe in celebrating every achievement and viewing challenges as opportunities for personal and professional growth. We are looking for individuals with a B.Tech/BS degree and a strong foundation in Python, C++, AI, and JavaScript. Your role will involve utilizing your expertise in Python and C++, including object-oriented programming skills. Experience with the Linux operating system, data structures, and algorithms will be beneficial. In this position, you will be expected to write scalable and elegant code, while also demonstrating proficiency in Git versioning, software licensing, and the complete software development cycle. An understanding of high-performance computing, parallelization on CPUs and GPUs, and the ability to use Python libraries for GUI development will be advantageous. If you are passionate about software engineering and possess the desired qualifications and skills, we encourage you to join our team and contribute to our innovative projects. For further queries, please reach out to us at careers@paanduv.com or contact us at 8218317925.
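As a hedged illustration of the CPU parallelization mentioned above, here is a short Python sketch using the standard library's multiprocessing pool; the workload is a hypothetical CPU-bound function, and GPU parallelization would rely on other tooling.

```python
# Minimal sketch: fanning a CPU-bound task out across worker processes.
# The workload is a hypothetical stand-in for real numerical code.
from multiprocessing import Pool

def slow_square_sum(n: int) -> int:
    """Deliberately CPU-bound placeholder for real computation."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [2_000_000, 3_000_000, 4_000_000, 5_000_000]
    with Pool(processes=4) as pool:          # one worker per input here
        results = pool.map(slow_square_sum, inputs)
    print(results)
```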
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
You should have a Bachelor's degree in Computer Science, Electrical Engineering or equivalent practical experience, along with 8 years of experience with compilers (e.g., optimization, parallelization, etc.) and familiarity with Multi-Level Intermediate Representation (MLIR) or the Low Level Virtual Machine (LLVM). A Master's degree or PhD in Computer Science or a related field would be preferred. It would be advantageous to have experience in compiling for architectures across hardware IP blocks such as Central Processing Units (CPUs), Graphics Processing Units (GPUs), and Neural Processing Units (NPUs), as well as experience executing programs or multiple projects. Additionally, experience with compiler development for accelerator-based architectures is desired. As a software engineer at Google, you will be working on cutting-edge technologies that impact billions of users worldwide. The projects you work on will involve handling massive amounts of information beyond web search and will require expertise in information retrieval, distributed computing, system design, networking, security, artificial intelligence, and more. Versatility, leadership qualities, and a passion for tackling new challenges are essential qualities for this role. The compiler team at Google is responsible for analyzing, optimizing, and compiling machine learning models to further Google's mission of organizing information and making it universally accessible and useful. Combining AI, software, and hardware expertise, the team aims to create innovative technologies that enhance computing speed, seamlessness, and power to improve people's lives. As part of the Edge Tensor Processing Unit (TPU) compiler team, your responsibilities will include analyzing and enhancing compiler quality and performance, developing algorithms for optimization, parallelization, and scheduling to optimize compute and data movement costs for Machine Learning (ML) workloads on the Edge TPU, collaborating with Edge TPU architects on designing future accelerators and the hardware/software interface, mapping AI models and other workloads into Edge TPU instructions through the compiler, and managing a team of compiler engineers.
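As a toy illustration of the scheduling and optimization problems described above, the Python sketch below greedily list-schedules a small dependency graph of ops onto two execution units; the op graph, durations, and two-unit machine model are entirely hypothetical and unrelated to the actual Edge TPU compiler.

```python
# Minimal sketch: greedy list scheduling of dependent ops onto parallel units.
# Ops, durations, and the machine model are hypothetical.
import heapq

OPS = {  # op name -> (duration, dependencies)
    "load_a": (2, []),
    "load_b": (2, []),
    "matmul": (6, ["load_a", "load_b"]),
    "bias":   (1, ["matmul"]),
    "relu":   (1, ["bias"]),
    "store":  (2, ["relu"]),
}

def schedule(ops, num_units=2):
    finish = {}                                     # op -> finish time
    unit_free = [(0, u) for u in range(num_units)]  # (time the unit frees up, unit id)
    heapq.heapify(unit_free)
    done, order = set(), []
    while len(done) < len(ops):
        # Ready ops have all dependencies finished; pick the longest one first.
        ready = [o for o in ops if o not in done and all(d in done for d in ops[o][1])]
        op = max(ready, key=lambda o: ops[o][0])
        free_at, unit = heapq.heappop(unit_free)
        start = max(free_at, max((finish[d] for d in ops[op][1]), default=0))
        finish[op] = start + ops[op][0]
        heapq.heappush(unit_free, (finish[op], unit))
        done.add(op)
        order.append((op, unit, start, finish[op]))
    return order

if __name__ == "__main__":
    for op, unit, start, end in schedule(OPS):
        print(f"{op:7s} unit={unit} start={start} end={end}")
```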
Posted 1 week ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Minimum qualifications: Bachelor's degree in Computer Science, Electrical Engineering or equivalent practical experience. 8 years of experience with compilers (e.g., optimization, parallelization, etc.) or ML (optimization, SDK etc.). 3 years of experience in managing a team. Preferred qualifications: Master's degree or PhD in Computer Science or a related field. Experience optimizing ML models for inference. Experience developing developer tools and Software Development Kits (SDKs) in the context of accelerator-based architectures. Experience compiling for heterogeneous architectures across hardware IP blocks, including but not limited to Central Processing Units (CPUs), Graphics Processing Units (GPUs) and Neural Processing Units (NPUs). About The Job Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google's needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward. In this role, you will join the Developer Experience team to enhance the developer journey across all EdgeTPU (Tensor Processing Unit) offerings, which include embedded ML accelerators powering applications from smartphones to data centers. Google's mission is to organize the world's information and make it universally accessible and useful. Our team combines the best of Google AI, Software, and Hardware to create radically helpful experiences. We research, design, and develop new technologies and hardware to make computing faster, seamless, and more powerful. We aim to make people's lives better through technology. Responsibilities Analyze and improve developer tools quality and performance on optimization decisions, correctness and performance as part of the EdgeTPU developer experience team. Design end-to-end customer user journeys to execute Machine Learning (ML) workloads on the EdgeTPU in a seamless manner. Collaborate with cross-functional teams to develop efficient workflows that deliver an excellent user experience. Partner with ML model developers, researchers, and EdgeTPU hardware/software teams to accelerate the transition from research ideas to exceptional user experiences on the EdgeTPU. Lead a team of engineers. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law.
If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 2 weeks ago
5.0 years
0 Lacs
India
Remote
Job Title: Senior QA Automation Engineer (E2E Testing – Cypress, Playwright, Selenium) Experience Required: 5+ Years Location: Remote Job Type: Freelancer for a Project in Spain (needs to work closely with the Spain team) About the Role: We are seeking a highly skilled Senior QA Engineer with deep expertise in end-to-end (E2E) automated testing for modern web applications built with React and Next.js. This role focuses exclusively on test automation using Playwright, Cypress, and cutting-edge AI-powered QA tools. You will play a key role in building robust testing infrastructure, accelerating feedback loops, and improving test coverage across complex, dynamic frontend architectures. If you're passionate about automation, love exploring the latest in AI-enabled testing, and thrive in fast-paced, tech-forward environments, we want to hear from you. Key Responsibilities Build and maintain high-quality E2E automated test suites using Playwright and Cypress tailored for React/Next.js applications. Explore, evaluate, and implement AI-driven testing tools (e.g., QA.tech or similar) to reduce manual test creation and maintenance. Develop test strategies optimized for SSR/CSR architectures, dynamic routing, and API integrations in modern frontend stacks. Collaborate closely with Developers, Designers, and DevOps to enforce best testing practices and ensure efficient CI/CD workflows. Investigate and report issues with detailed bug documentation and feedback loops to dev teams. Optimize test reliability and execution speed by minimizing test flakiness and leveraging parallelization and cloud testing platforms. Stay ahead of the curve with emerging trends in AI-assisted QA automation, and proactively integrate advancements into the workflow. Required Skills & Qualifications: Proven experience in Playwright: setup, advanced scripting, assertions, and Next.js-specific test optimization. Strong proficiency in Cypress: component and E2E testing, and seamless integration with React-based projects. Solid JavaScript/TypeScript programming skills and familiarity with modern frontend ecosystems. Familiarity with AI-based testing platforms and concepts: self-healing tests, adaptive test generation, and visual regression powered by AI agents. Hands-on experience with CI/CD pipelines (e.g., GitHub Actions, Vercel) and test automation integration. Strong debugging and problem-solving abilities, including log analysis and cross-browser testing. Ability to interpret product requirements, user flows, and edge cases into actionable test scenarios. Preferred Experience: Direct experience with AI-powered E2E platforms (e.g., QA.tech, Posium.ai, Sendbird). Awareness of the evolving React/Next.js testing ecosystem, including SSR/ISR testing strategies. Exposure to AI agents for auto-generating test scripts, performing test execution, and analyzing test failures through clustering or LLM-based summarization. Contributions to open-source QA/testing tools or frameworks are a plus. If interested in this freelancing opportunity, please send your updated resume to sridevi.k@hexad.in
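For illustration, a minimal E2E smoke check is sketched below using Playwright's Python bindings (the role itself targets the JavaScript/TypeScript Playwright and Cypress APIs, where the flow is analogous); the URL and selectors are hypothetical and assume Playwright and its browsers are installed.

```python
# Minimal sketch: an E2E smoke check with Playwright's Python API.
# URL and selectors are hypothetical; run `playwright install` first.
from playwright.sync_api import sync_playwright

def test_home_page_renders(base_url: str = "https://example.com") -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(base_url, wait_until="networkidle")
        assert "Example" in page.title(), "unexpected page title"
        heading = page.locator("h1").first.inner_text()
        assert heading.strip(), "main heading should not be empty"
        browser.close()

if __name__ == "__main__":
    test_home_page_renders()
    print("E2E smoke check passed")
```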
Posted 2 weeks ago
4.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Location Bengaluru, Karnataka, India Job ID R-231679 Date posted 17/07/2025 Job Title: Senior MLOps Engineer Introduction to role: Are you ready to lead the charge in transforming machine learning operations? As a Senior MLOps Engineer at Alexion, you'll report directly to the IT Director of Insights and Analytics, playing a pivotal role in our IT RDU organization. Your mission? To develop and implement brand-new machine learning solutions that propel our business forward. With your expertise, you'll design, build, and deploy production-ready models at scale, ensuring they meet the highest standards. Accountabilities: Lead the development and implementation of MLOps infrastructure and tools for machine learning models. Collaborate with multi-functional teams to identify, prioritize, and solve business problems using machine learning techniques. Design, develop, and implement production-grade machine learning models that meet business requirements. Oversee the training, testing, and validation of machine learning models. Ensure that machine learning models meet high-quality standards, including scalability, maintainability, and performance. Design and implement efficient development environments and processes for ML applications. Coordinate with partners and senior management to communicate updates on the progress of machine learning projects. Develop assets, accelerators, and thought capital for your practice by providing best-in-class frameworks and reusable components. Develop and maintain MLOps pipelines to automate machine learning workflows and integrate them with existing IT systems. Integrate Generative AI model-based solutions within the broader machine learning ecosystem, ensuring they adhere to ethical guidelines and serve intended business purposes. Implement robust monitoring and governance mechanisms for Generative AI model-based solutions to ensure they evolve in alignment with business needs and regulatory standards. Essential Skills/Experience: Bachelor's degree in Computer Science, Electrical Engineering, Mathematics, Statistics, or a related field. 4+ years of experience in developing and deploying machine learning models in production environments. Hands-on experience building production models with a focus on data science operations including serverless architectures, Kubernetes, Docker/containerization, and model upkeep and maintenance. Familiarity with API-based application architecture and API frameworks. Experience with CI/CD orchestration frameworks, such as GitHub Actions, Jenkins or Bitbucket Pipelines. Deep understanding of the software development lifecycle and maintenance. Extensive experience with one or more orchestration tools (e.g., Airflow, Flyte, Kubeflow). Experience working with MLOps tools like experiment tracking, model registry tools, and feature stores (e.g., MLflow, SageMaker, Azure). Strong programming skills in Python and experience with libraries such as TensorFlow, Keras, or PyTorch. Proficiency in MLOps standard methodologies, including model training, testing, deployment, and monitoring. Experience with cloud computing platforms, such as AWS, Azure or GCP. Proficient in standard processes within software engineering and agile methodologies. Strong understanding of data structures, algorithms, and machine learning techniques. Excellent communication and collaboration skills with the ability to work in a multi-functional team setting. Self-driven and hard-working, with the ability to work independently and strong problem-solving skills.
Excellent communication and collaboration skills with the ability to partner well with business stakeholders. Desirable Skills/Experience: Experience in the pharmaceutical industry or related fields. Advanced degree in Computer Science, Electrical Engineering, Mathematics, Statistics, or a related field. Strong understanding of parallelization and asynchronous computation. Strong knowledge of data science techniques and tools, including statistical analysis, data visualization, and SQL. When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world. At AstraZeneca's Alexion division, you'll find yourself at the forefront of biomedical science. Our commitment to transparency and ethics drives us to push boundaries and translate complex biology into transformative medicines. With global reach and potent capabilities, we're shaping the future of rare disease treatment. Here, you'll grow in an energizing culture that values innovation and connection. Empowered by tailored development programs, you'll align your growth with our mission to make a difference for underserved patients worldwide. Ready to make an impact? Apply now to join our team! Date Posted 18-Jul-2025 Closing Date 30-Jul-2025 Alexion is proud to be an Equal Employment Opportunity and Affirmative Action employer. We are committed to fostering a culture of belonging where every single person can belong because of their uniqueness. The Company will not make decisions about employment, training, compensation, promotion, and other terms and conditions of employment based on race, color, religion, creed or lack thereof, sex, sexual orientation, age, ancestry, national origin, ethnicity, citizenship status, marital status, pregnancy, (including childbirth, breastfeeding, or related medical conditions), parental status (including adoption or surrogacy), military status, protected veteran status, disability, medical condition, gender identity or expression, genetic information, mental illness or other characteristics protected by law. Alexion provides reasonable accommodations to meet the needs of candidates and employees. To begin an interactive dialogue with Alexion regarding an accommodation, please contact accommodations@Alexion.com. Alexion participates in E-Verify.
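As a small, hedged illustration of the experiment tracking listed among the MLOps tools above, the Python sketch below logs parameters and a per-epoch metric to MLflow; the experiment name, parameters, and metric values are hypothetical placeholders for a real training job.

```python
# Minimal sketch: logging a run to MLflow's tracking API.
# Experiment name, params, and metrics are hypothetical.
import mlflow

def train_and_track():
    mlflow.set_experiment("demand-forecast-dev")
    with mlflow.start_run(run_name="baseline"):
        params = {"learning_rate": 0.01, "epochs": 3}
        mlflow.log_params(params)
        for epoch in range(params["epochs"]):
            val_loss = 1.0 / (epoch + 1)   # stand-in for a real validation loss
            mlflow.log_metric("val_loss", val_loss, step=epoch)

if __name__ == "__main__":
    train_and_track()
```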
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Us We are the independent expert in assurance and risk management. Driven by our purpose, to safeguard life, property, and the environment, we empower our customers and their stakeholders with facts and reliable insights so that critical decisions can be made with confidence. As a trusted voice for many of the world’s most successful organizations, we use our knowledge to advance safety and performance, set industry benchmarks, and inspire and invent solutions to tackle global transformations. About Digital Solutions We provide engineering software tools and enterprise solutions for managing risk to improve safety and performance across industries, including the maritime, energy, and healthcare sectors. Research, development, implementations and partnerships with our customers have earned us the position as a trusted third-party vendor of software and services. We are accelerating the pace of transition toward the digitalization of systems and software-as-a-service (SaaS) solutions to give customers the efficiency and flexibility of the cloud, including the power and insights from advanced analytics. About The Role Would you like to be part of a dedicated team that develops software solutions for strength assessment of offshore and maritime structures? Software Engineering (SWE) Renewables and Ocean Structures (ROS) is seeking a senior developer with experience in developing finite element (FE) solvers using programming languages like C++, C# or Fortran. You will be part of our Strength Assessment team and play a key role in the development of DNV Sesam software (https://www.dnv.com/sesam). DNV Sesam software is a global market leader in the maritime and oil & gas industries. With the world transforming towards renewable energy, Sesam is also becoming key for the design and operation of fixed and floating offshore wind turbine (OWT) structures. Your primary responsibility will be to enhance and renew our FE software as part of the Strength Assessment team. The Strength Assessment team is responsible for the Sesam FE solver and associated FE tools. The team is also responsible for the development of Sesam modules for code-checking (ULS and FLS) of beam and plate-type structures according to standards like Eurocode, ISO, DNV and API. SWE ROS consists of 80+ dedicated developers and engineers in multiple locations such as Oslo, Bristol, Shanghai, Gdynia and we are expanding to Pune, India. SWE ROS is responsible for software development, software technology, architecture, testing and operations of advanced engineering software like Sesam, Bladed, WindFarmer and SolarFarmer. You will engage with domain experts, professional software engineers, software testers, user experience designers, product managers and technical support engineers who pride themselves in delivering high-quality software. You will get the opportunity to solve hard and interesting problems. Main Responsibilities Development of our Sesam FE solvers and associated tools used for strength assessment of maritime, offshore and renewable structures Improve computational performance of Sesam FE solvers and tools through profiling, code and algorithm optimization, and parallelization. 
Understand customer needs for improving the performance and user experience of Sesam software engineering workflows Together with the Strength Assessment team, responsible for enhancing and renewing the architecture of Sesam strength assessment software Contribute to the development of Sesam strength assessment tools to support standards like Eurocode, ISO, DNV, API covering both frame and plate-type structures What we offer Flexible work arrangements for better work-life balance. Generous Paid Leaves (Annual, Sick, Compassionate, Local Public, Marriage, Maternity, Paternity, Medical leave). Medical benefits (Insurance and Annual Health Check-up). Pension and Insurance Policies (Group Term Life Insurance, Group Personal Accident Insurance, Travel Insurance). Training and Development Assistance (Training Sponsorship, On-The-Job Training, Training Programme). Additional Benefits (Long Service Awards, Mobile Phone Reimbursement). Company bonus/Profit share. Competitive remuneration. Hybrid workplace model. A culture of continuous learning to aid progression. Personal Growth opportunity using our 70-20-10 philosophy: 70% learning on the job, 20% coaching and 10% training. *Benefits may vary based on position, tenure/contract/grade level* Equal Opportunity Statement DNV is an Equal Opportunity Employer and gives consideration for employment to qualified applicants without regard to gender, religion, race, national or ethnic origin, cultural background, social group, disability, sexual orientation, gender identity, marital status, age or political opinion. Diversity is fundamental to our culture, and we invite you to be part of this diversity! About You MSc or higher in Structural Engineering, Naval Architecture, Maritime Engineering, Civil Engineering, Mathematics, Physics or similar 5-10 years of experience as a software developer of CAE software with a focus on finite element solvers Strong knowledge of numerical methods, algorithms, and data structures relevant for FE analysis, including mesh generation, solver techniques, and post-processing Experience with programming languages, frameworks, and tools such as C#, C++, or Fortran Familiarity with object-oriented practices is required Experience with CI/CD. We use Azure DevOps Experience in software development practices, including version control (we use Git), software testing (unit and regression), and debugging Experience with Visual Studio is a plus.
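To ground the FE-solver work described above, here is a deliberately tiny Python/NumPy sketch of the core of a linear finite element solve: assembling a global stiffness matrix for a 1D bar, applying a clamped end and a tip load, and solving K u = f. Material and load values are hypothetical, and a production solver such as the one described in the posting uses sparse storage and far richer element formulations.

```python
# Minimal sketch: assemble and solve a 1D bar FE model (K u = f).
# Geometry, stiffness, and load values are hypothetical.
import numpy as np

def solve_bar(n_elements=4, length=2.0, EA=2.1e7, tip_load=1.0e4):
    n_nodes = n_elements + 1
    le = length / n_elements
    k_e = (EA / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elements):                              # global assembly
        K[e:e + 2, e:e + 2] += k_e
    f = np.zeros(n_nodes)
    f[-1] = tip_load                                         # point load at the free end
    free = slice(1, n_nodes)                                 # node 0 is clamped
    u = np.zeros(n_nodes)
    u[free] = np.linalg.solve(K[free, free], f[free])
    return u                                                 # tip value ~ P*L/(EA)

if __name__ == "__main__":
    print(solve_bar())
```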
Posted 3 weeks ago
8.0 - 11.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
JOB DESCRIPTION
Roles & responsibilities
Here are some of the key responsibilities of a Sr Generative AI Engineer: Research and Development: Conduct original research on generative AI models, focusing on model architecture, training methodologies, fine-tuning techniques, and evaluation strategies. Maintain a strong publication record in top-tier conferences and journals, showcasing contributions to the fields of Natural Language Processing (NLP), Deep Learning (DL), and Machine Learning (ML). Multimodal Model Development: Design and experiment with multimodal generative models that integrate various data types, including text, images, and other modalities to enhance AI capabilities. Agentic AI Systems: Develop and design autonomous AI systems that exhibit agentic behavior, capable of making independent decisions and adapting to dynamic environments. Model Development and Implementation: Lead the design, development, and implementation of generative AI models and systems, ensuring a deep understanding of the problem domain. Select suitable models, train them on large datasets, fine-tune hyperparameters, and optimize overall performance. Algorithm Optimization: Optimize generative AI algorithms to enhance their efficiency, scalability, and computational performance through techniques such as parallelization, distributed computing, and hardware acceleration, maximizing the capabilities of modern computing architectures. Data Preprocessing and Feature Engineering: Manage large datasets by performing data preprocessing and feature engineering to extract critical information for generative AI models. This includes tasks such as data cleaning, normalization, dimensionality reduction, and feature selection. Model Evaluation and Validation: Evaluate the performance of generative AI models using relevant metrics and validation techniques. Conduct experiments, analyze results, and iteratively refine models to meet desired performance benchmarks. Technical Leadership: Provide technical leadership and mentorship to junior team members, guiding their development in generative AI through work reviews, skill-building, and knowledge sharing. Documentation and Reporting: Document research findings, model architectures, methodologies, and experimental results thoroughly. Prepare technical reports, presentations, and whitepapers to effectively communicate insights and findings to stakeholders. Continuous Learning and Innovation: Stay abreast of the latest advancements in generative AI by reading research papers, attending conferences, and engaging with relevant communities. Foster a culture of learning and innovation within the team to drive continuous improvement.
Mandatory technical & functional skills
Strong programming skills in Python and frameworks like PyTorch or TensorFlow. In-depth knowledge of deep learning (CNNs, RNNs, LSTMs, Transformers), LLMs (BERT, GPT, etc.) and NLP algorithms. Familiarity with frameworks like LangGraph/CrewAI/AutoGen to develop, deploy and evaluate AI agents. Ability to test and deploy open-source LLMs from Hugging Face (Meta LLaMA 3.1, BLOOM, Mistral AI, etc.). Ensure scalability and efficiency, handle data tasks, stay current with AI trends, and contribute to model documentation for internal and external audiences. Cloud computing experience, particularly with Google Cloud Platform or Azure, is essential, with a strong foundation in understanding the data analytics services offered by Google or Azure (BigQuery/Synapse). Hands-on experience with ML platforms offered through GCP (Vertex AI), Azure (AI Foundry) or AWS (SageMaker). Large-scale deployment of GenAI/DL/ML projects, with a good understanding of MLOps/LLMOps.
Preferred technical & functional skills
Strong oral and written communication skills with the ability to communicate technical and non-technical concepts to peers and stakeholders. Ability to work independently with minimal supervision, and escalate when needed.
Key behavioral attributes/requirements
Ability to mentor junior developers. Ability to own project deliverables, not just individual tasks. Understand business objectives and functions to support data needs.
QUALIFICATIONS
This role is for you if you have the below:
Educational Qualifications: PhD or equivalent degree in Computer Science/Applied Mathematics/Applied Statistics/Artificial Intelligence. Preference given to research scholars from IITs, NITs and IIITs (research scholars who have submitted their thesis).
Work Experience: 8 to 11 years of experience with a strong record of publications in top-tier conferences and journals.
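As a small, hedged illustration of the open-source LLM skills listed above, the Python sketch below loads a small causal language model with Hugging Face transformers and generates a continuation; the model name is a lightweight stand-in, and production work adds quantization, batching, evaluation, and deployment concerns on top of this.

```python
# Minimal sketch: prompting a small open LLM with Hugging Face transformers.
# "distilgpt2" is a lightweight stand-in for larger open models.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "distilgpt2"

def generate(prompt: str, max_new_tokens: int = 40) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Generative AI systems are"))
```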
Posted 3 weeks ago
8.0 - 11.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
JOB DESCRIPTION
Roles & responsibilities
Here are some of the key responsibilities of a Sr Generative AI Engineer:
Research and Development: Conduct original research on generative AI models, focusing on model architecture, training methodologies, fine-tuning techniques, and evaluation strategies. Maintain a strong publication record in top-tier conferences and journals, showcasing contributions to the fields of Natural Language Processing (NLP), Deep Learning (DL), and Machine Learning (ML).
Multimodal Model Development: Design and experiment with multimodal generative models that integrate various data types, including text, images, and other modalities, to enhance AI capabilities.
Agentic AI Systems: Develop and design autonomous AI systems that exhibit agentic behavior, capable of making independent decisions and adapting to dynamic environments.
Model Development and Implementation: Lead the design, development, and implementation of generative AI models and systems, ensuring a deep understanding of the problem domain. Select suitable models, train them on large datasets, fine-tune hyperparameters, and optimize overall performance.
Algorithm Optimization: Optimize generative AI algorithms to enhance their efficiency, scalability, and computational performance through techniques such as parallelization, distributed computing, and hardware acceleration, maximizing the capabilities of modern computing architectures.
Data Preprocessing and Feature Engineering: Manage large datasets by performing data preprocessing and feature engineering to extract critical information for generative AI models. This includes tasks such as data cleaning, normalization, dimensionality reduction, and feature selection.
Model Evaluation and Validation: Evaluate the performance of generative AI models using relevant metrics and validation techniques. Conduct experiments, analyze results, and iteratively refine models to meet desired performance benchmarks.
Technical Leadership: Provide technical leadership and mentorship to junior team members, guiding their development in generative AI through work reviews, skill-building, and knowledge sharing.
Documentation and Reporting: Document research findings, model architectures, methodologies, and experimental results thoroughly. Prepare technical reports, presentations, and whitepapers to effectively communicate insights and findings to stakeholders.
Continuous Learning and Innovation: Stay abreast of the latest advancements in generative AI by reading research papers, attending conferences, and engaging with relevant communities. Foster a culture of learning and innovation within the team to drive continuous improvement.
Mandatory technical & functional skills
Strong programming skills in Python and frameworks like PyTorch or TensorFlow. In-depth knowledge of deep learning (CNN, RNN, LSTM, Transformers), LLMs (BERT, GPT, etc.) and NLP algorithms. Familiarity with frameworks like LangGraph, CrewAI, or AutoGen to develop, deploy, and evaluate AI agents. Ability to test and deploy open-source LLMs from Hugging Face (Meta LLaMA 3.1, BLOOM, Mistral AI, etc.). Ensure scalability and efficiency, handle data tasks, stay current with AI trends, and contribute to model documentation for internal and external audiences. Cloud computing experience, particularly with Google Cloud Platform or Azure, is essential, with a strong foundation in the data analytics services offered by Google or Azure (BigQuery/Synapse). Hands-on experience with ML platforms such as GCP Vertex AI, Azure AI Foundry, or AWS SageMaker. Large-scale deployment of GenAI/DL/ML projects, with a good understanding of MLOps/LLMOps.
Preferred Technical & Functional Skills
Strong oral and written communication skills with the ability to communicate technical and non-technical concepts to peers and stakeholders. Ability to work independently with minimal supervision, and escalate when needed.
Key behavioral attributes/requirements
Ability to mentor junior developers. Ability to own project deliverables, not just individual tasks. Understand business objectives and functions to support data needs.
#KGS
QUALIFICATIONS
This role is for you if you have the below:
Educational Qualifications
PhD or equivalent degree in Computer Science/Applied Mathematics/Applied Statistics/Artificial Intelligence. Preference given to research scholars from IITs, NITs and IIITs (research scholars who have submitted their thesis).
Work Experience
8 to 11 years of experience with a strong record of publications in top-tier conferences and journals
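The posting above asks for hands-on experience testing and deploying open-source LLMs from Hugging Face. As a minimal, hedged illustration of what that can look like in Python with the `transformers` library (not part of the posting itself), the sketch below loads a checkpoint and generates a short completion; the model name is illustrative and any compatible causal LM could be substituted.

```python
# Minimal sketch: load an open-source causal LM from the Hugging Face Hub and
# run a single generation. The checkpoint name below is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed/illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = "Summarize the role of a generative AI engineer in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```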
Posted 3 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Minimum qualifications: Bachelor's degree in Computer Science, Electronics or Electrical Engineering, or equivalent practical experience. 2 years of experience in C++ and data structures and algorithms. 2 years of development experience in C++. Preferred qualifications: Master's degree or PhD in Electrical/Electronics Engineering, Computer Engineering, Computer Science, or a related field. Experience with Compilers. Experience in power and performance optimizations. Understanding of hardware, especially hardware that provides a high degree of parallelism. About The Job Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google’s needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward. We are the team that builds Google Tensor, Google’s custom System-on-Chip (SoC) that powers the latest Pixel phones. Tensor makes transformative user experiences possible with the help of Machine Learning (ML) running on Tensor TPU. Our team’s work enables Gemini Nano, our efficient AI model for on-device tasks to run on Pixel phones. Our goal is to productize the latest ML innovations and research by delivering computing hardware and software. In this role, you will work on developing ML compilers for the Tensor TPU to accelerate Generative AI and other machine learning models running on custom hardware accelerators. You will also manage project priorities, deadlines, and deliverables. Google's mission is to organize the world's information and make it universally accessible and useful. Our team combines the best of Google AI, Software, and Hardware to create radically helpful experiences. We research, design, and develop new technologies and hardware to make computing faster, seamless, and more powerful. We aim to make people's lives better through technology. Responsibilities Build compilers and tools that efficiently map ML models with a particular focus on computing use cases to the hardware Instruction Set Architecture (ISA). Evaluate various trade-offs of different parallelization strategies such as performance, power, energy and memory consumption. Collaborate with machine learning researchers to constantly improve the domain specific compiler. Collaborate with hardware engineers to evolve future accelerators. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. 
If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 1 month ago
0 years
10 - 15 Lacs
Mumbai Metropolitan Region
On-site
Role Overview
We are seeking a Software Engineer who can contribute across the lifecycle of quantitative trading strategies. This includes simulation, analysis, infrastructure, and live deployment. The ideal candidate is a strong problem solver with a solid foundation in data structures and algorithms, and a mindset geared toward continuous learning and adaptability. While we currently use specific technologies, we value engineers who can evaluate and implement the best tools and platforms based on the problem at hand.
Responsibilities
Design and implement scalable, accurate, and high-performance simulation frameworks to test and evaluate trading strategies. Collaborate with research analysts to support hypothesis-driven testing and provide meaningful insights through customized data outputs and visualizations. Optimize computational performance using appropriate technologies (e.g., parallelization, GPU computing, or distributed systems). Help build and maintain robust data pipelines for both historical and real-time data ingestion and processing. Support the development and management of real-time trading systems that ensure consistency between simulated and live environments. Assist in building monitoring tools and dashboards to track live trading performance and manage operational risk. Continuously explore and adopt new technologies, frameworks, and best practices to improve system capabilities.
Requirements
Strong proficiency in data structures, algorithms, and problem-solving techniques. Demonstrated experience with developing efficient, scalable systems for simulation, data processing, or real-time execution. Ability to abstract technical requirements and evaluate/implement suitable technologies accordingly. Familiarity with data storage concepts, APIs, and system design for both batch and real-time applications. Skilled in creating clear and effective visualizations for analysis and monitoring purposes. Solid understanding of software development principles, including testing, version control, documentation, and CI/CD pipelines. Excellent communication and teamwork skills, with the ability to work collaboratively across teams.
Skills: real-time execution, data storage concepts, testing, data, data structures, documentation, CI/CD pipelines, problem-solving, teamwork, simulation frameworks, visualizations, algorithms, system design, data processing, communication, version control, APIs, software development principles, trading
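Several responsibilities above revolve around simulation frameworks and parallelizing computation. As a minimal, self-contained sketch (not the firm's actual stack), the Python snippet below fans independent Monte Carlo strategy simulations out across CPU cores with `multiprocessing`; the toy return model and its parameters are purely illustrative.

```python
# Minimal sketch: parallelize independent strategy simulations across CPU cores.
import numpy as np
from multiprocessing import Pool

def simulate_strategy(seed: int) -> float:
    """Run one Monte Carlo path of a toy strategy and return its final return."""
    rng = np.random.default_rng(seed)
    daily_returns = rng.normal(loc=0.0005, scale=0.01, size=252)  # one trading year
    return float(np.prod(1.0 + daily_returns) - 1.0)

if __name__ == "__main__":
    with Pool() as pool:
        results = pool.map(simulate_strategy, range(1_000))  # 1,000 independent paths
    print(f"mean simulated return: {np.mean(results):.2%}")
```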
Posted 1 month ago
3.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
EY GDS – Data and Analytics (D&A) – SSIS – Senior
We’re looking for Informatica or SSIS Engineers with a cloud background (AWS, Azure).
Primary skills: Has played key roles in multiple large global transformation programs on business process management. Experience in database querying using SQL. Should have experience building/integrating data into a data warehouse. Experience in data profiling and reconciliation. Informatica PowerCenter/IBM DataStage/SSIS development. Strong proficiency in SQL/PLSQL. Good experience in performance tuning ETL workflows and suggesting improvements. Developed expertise in complex data management or application integration solutions and their deployment in areas of data migration, data integration, application integration or data quality. Experience in data processing, orchestration, parallelization, transformations and ETL fundamentals. Leverages a variety of programming languages & data crawling/processing tools to ensure data reliability, quality & efficiency (optional). Experience in a cloud data-related tool (Microsoft Azure, Amazon S3 or Data Lake). Knowledge of cloud infrastructure and knowledge of Talend Cloud is an added advantage. Knowledge of data modelling principles. Knowledge of Autosys scheduling. Good experience in database technologies. Good knowledge of Unix systems.
Responsibilities: Need to work as a team member and contribute to various technical streams of data integration projects. Provide product- and design-level technical best practices. Interface and communicate with the onsite coordinators. Completion of assigned tasks on time and regular status reporting to the lead. Building a quality culture. Use an issue-based approach to deliver growth, market and portfolio strategy engagements for corporates. Strong communication, presentation and team building skills and experience in producing high quality reports, papers, and presentations. Experience in executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint.
Qualification: BE/BTech/MCA (must) with industry experience of 3-7 years. Experience in Talend jobs, joblets and custom components. Should have knowledge of error handling and performance tuning in Talend. Experience in big data technologies such as Sqoop, Impala, Hive, YARN, Spark, etc. Informatica PowerCenter/IBM DataStage/SSIS development. Strong proficiency in SQL/PLSQL. Good experience in performance tuning ETL workflows and suggesting improvements. Experience with at least 3-4 clients on short-duration projects (6-8+ months each), or with at least 2 clients on projects lasting 1-2 years or more. People with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
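The posting above is built around Informatica/SSIS/Talend pipelines rather than hand-written code, but it explicitly calls out data profiling and reconciliation. As a tool-agnostic sketch of that reconciliation idea (not an EY artifact), the minimal pandas snippet below compares grouped totals between a source extract and a warehouse load; the file and column names are assumptions for illustration.

```python
# Minimal sketch: reconcile a numeric measure between a source extract and a
# warehouse load, grouped by a business key. Names are illustrative only.
import pandas as pd

source = pd.read_csv("source_extract.csv")    # hypothetical source file
target = pd.read_csv("warehouse_load.csv")    # hypothetical warehouse export

print("row counts:", len(source), "vs", len(target))

recon = (
    source.groupby("customer_id")["amount"].sum().rename("source_amount").to_frame()
    .join(target.groupby("customer_id")["amount"].sum().rename("target_amount"), how="outer")
    .fillna(0.0)
)
mismatches = recon[recon["source_amount"] != recon["target_amount"]]
print(f"{len(mismatches)} keys with mismatched totals")
```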
Posted 1 month ago
3.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
EY GDS – Data and Analytics (D&A) – SSIS – Senior
We’re looking for Informatica or SSIS Engineers with a cloud background (AWS, Azure).
Primary skills: Has played key roles in multiple large global transformation programs on business process management. Experience in database querying using SQL. Should have experience building/integrating data into a data warehouse. Experience in data profiling and reconciliation. Informatica PowerCenter/IBM DataStage/SSIS development. Strong proficiency in SQL/PLSQL. Good experience in performance tuning ETL workflows and suggesting improvements. Developed expertise in complex data management or application integration solutions and their deployment in areas of data migration, data integration, application integration or data quality. Experience in data processing, orchestration, parallelization, transformations and ETL fundamentals. Leverages a variety of programming languages & data crawling/processing tools to ensure data reliability, quality & efficiency (optional). Experience in a cloud data-related tool (Microsoft Azure, Amazon S3 or Data Lake). Knowledge of cloud infrastructure and knowledge of Talend Cloud is an added advantage. Knowledge of data modelling principles. Knowledge of Autosys scheduling. Good experience in database technologies. Good knowledge of Unix systems.
Responsibilities: Need to work as a team member and contribute to various technical streams of data integration projects. Provide product- and design-level technical best practices. Interface and communicate with the onsite coordinators. Completion of assigned tasks on time and regular status reporting to the lead. Building a quality culture. Use an issue-based approach to deliver growth, market and portfolio strategy engagements for corporates. Strong communication, presentation and team building skills and experience in producing high quality reports, papers, and presentations. Experience in executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint.
Qualification: BE/BTech/MCA (must) with industry experience of 3-7 years. Experience in Talend jobs, joblets and custom components. Should have knowledge of error handling and performance tuning in Talend. Experience in big data technologies such as Sqoop, Impala, Hive, YARN, Spark, etc. Informatica PowerCenter/IBM DataStage/SSIS development. Strong proficiency in SQL/PLSQL. Good experience in performance tuning ETL workflows and suggesting improvements. Experience with at least 3-4 clients on short-duration projects (6-8+ months each), or with at least 2 clients on projects lasting 1-2 years or more. People with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 month ago
3.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
EY GDS – Data and Analytics (D&A) – SSIS – Senior
We’re looking for Informatica or SSIS Engineers with a cloud background (AWS, Azure).
Primary skills: Has played key roles in multiple large global transformation programs on business process management. Experience in database querying using SQL. Should have experience building/integrating data into a data warehouse. Experience in data profiling and reconciliation. Informatica PowerCenter/IBM DataStage/SSIS development. Strong proficiency in SQL/PLSQL. Good experience in performance tuning ETL workflows and suggesting improvements. Developed expertise in complex data management or application integration solutions and their deployment in areas of data migration, data integration, application integration or data quality. Experience in data processing, orchestration, parallelization, transformations and ETL fundamentals. Leverages a variety of programming languages & data crawling/processing tools to ensure data reliability, quality & efficiency (optional). Experience in a cloud data-related tool (Microsoft Azure, Amazon S3 or Data Lake). Knowledge of cloud infrastructure and knowledge of Talend Cloud is an added advantage. Knowledge of data modelling principles. Knowledge of Autosys scheduling. Good experience in database technologies. Good knowledge of Unix systems.
Responsibilities: Need to work as a team member and contribute to various technical streams of data integration projects. Provide product- and design-level technical best practices. Interface and communicate with the onsite coordinators. Completion of assigned tasks on time and regular status reporting to the lead. Building a quality culture. Use an issue-based approach to deliver growth, market and portfolio strategy engagements for corporates. Strong communication, presentation and team building skills and experience in producing high quality reports, papers, and presentations. Experience in executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint.
Qualification: BE/BTech/MCA (must) with industry experience of 3-7 years. Experience in Talend jobs, joblets and custom components. Should have knowledge of error handling and performance tuning in Talend. Experience in big data technologies such as Sqoop, Impala, Hive, YARN, Spark, etc. Informatica PowerCenter/IBM DataStage/SSIS development. Strong proficiency in SQL/PLSQL. Good experience in performance tuning ETL workflows and suggesting improvements. Experience with at least 3-4 clients on short-duration projects (6-8+ months each), or with at least 2 clients on projects lasting 1-2 years or more. People with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 month ago
8.0 years
0 Lacs
Hyderābād
Remote
Why We Work at Dun & Bradstreet
Dun & Bradstreet unlocks the power of data through analytics, creating a better tomorrow. Each day, we are finding new ways to strengthen our award-winning culture and accelerate creativity, innovation and growth. Our 6,000+ global team members are passionate about what we do. We are dedicated to helping clients turn uncertainty into confidence, risk into opportunity and potential into prosperity. Bold and diverse thinkers are always welcome. Come join us! Learn more at dnb.com/careers.
Develop, maintain, and analyze datasets from diverse sources, including mobile and web, government agencies, web crawls, social media, and proprietary datasets, to create insights for our clients, power our platform, and create an innovative market understanding. Create designs and share ideas for creating and improving data pipelines and tools. This role will support maintaining our existing data pipelines and building new pipelines for increased customer insights.
Key Responsibilities: Collaborate with cross-functional teams to identify and design requirements for advanced systems with respect to processing, analyzing, searching, visualizing, developing, and testing vast datasets to ensure data accuracy. Implement business requirements by collaborating with stakeholders. Become familiar with existing application code and achieve a complete understanding of how the applications function. Maintain data quality by writing validation tests. Understand a variety of unique data sources. Create and maintain data documentation, including processing systems and flow diagrams. Help maintain existing systems, including troubleshooting and resolving alerts. Expected to meet critical project deadlines. Excellent organizational, analytical, and decision-making skills. Excellent verbal, written, and interpersonal communication skills. Capable of working collaboratively and independently. Share ideas across teams to spread awareness and use of frameworks and tooling. Show an ownership mindset in everything you do; be a problem solver, be curious and be inspired to take action, be proactive, seek ways to collaborate and connect with people and teams in support of driving success. Maintain a continuous growth mindset: keep learning through social experiences and relationships with stakeholders, experts, colleagues and mentors, and broaden your competencies through structured courses and programs.
Key Skills: 8+ years of experience in data analysis, visualization, and manipulation. Extensive experience working with GCP services, including BigQuery, Dataflow, Pub/Sub, Cloud Storage, Cloud Run, Cloud Functions and related technologies. Extensive experience with SQL and relational databases, including optimization and design. Experience with Amazon Web Services (EC2, RDS, S3, Redshift, EMR, and more). Experience with OS-level scripting (bash, sed, awk, grep, etc.). Experience in AdTech, web cookies, and online advertising technologies is a plus. Testable and efficient Python coding for data processing and analysis. Familiarity with parallelization of applications on a single machine and across a network of machines. Expertise in containerized infrastructure and CI/CD systems, including Cloud Build, Docker, Kubernetes, Harness, and GitHub Actions. Experience with version control tools such as Git, GitHub, and Bitbucket. Experience with Agile project management tools such as Jira and Confluence. Experience with object-oriented programming; functional programming a plus.
Analytic tools and ETL/ELT/data pipeline frameworks a plus. Experience with data visualization tools like Looker, Tableau, or Power BI. Experience working with global remote teams. Knowledge of data transformation processes. Google Cloud certification a plus. Proficiency in Microsoft Office Suite. Fluency in English and languages relevant to the team. This position is internally titled Senior Software Engineer. All Dun & Bradstreet job postings can be found at https://www.dnb.com/about-us/careers-and-people/joblistings.html and https://jobs.lever.co/dnb. Official communication from Dun & Bradstreet will come from an email address ending in @dnb.com. Notice to Applicants: Please be advised that this job posting page is hosted and powered by Lever. Your use of this page is subject to Lever's Privacy Notice and Cookie Policy, which governs the processing of visitor data on this platform.
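The posting above emphasizes hands-on GCP experience, with BigQuery first on the list. A minimal sketch (not from the posting) of querying BigQuery from Python with the official google-cloud-bigquery client might look like the following; the project, dataset, and table names are illustrative, and authentication is assumed to come from application default credentials.

```python
# Minimal sketch: run an aggregation query against BigQuery and print the rows.
from google.cloud import bigquery

client = bigquery.Client()  # uses application default credentials
sql = """
    SELECT source, COUNT(*) AS row_count
    FROM `my_project.analytics.events`   -- illustrative table name
    GROUP BY source
    ORDER BY row_count DESC
    LIMIT 10
"""
for row in client.query(sql).result():
    print(row.source, row.row_count)
```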
Posted 1 month ago
6.0 years
6 - 10 Lacs
Mumbai Metropolitan Region
On-site
Skills: Node.js, JavaScript, PHP, HTML, Cascading Style Sheets (CSS), MySQL, Full Stack Developer
Mumbai
Job Description
We are looking for a highly skilled computer programmer who is comfortable with both front and back end programming. Full stack developers are responsible for developing and designing front end web architecture, ensuring the responsiveness of applications, and working alongside graphic designers for web design features, among other duties. Full stack developers will be required to see out a project from conception to final product, requiring good organizational skills and attention to detail.
Full Stack Developer Responsibilities
Developing front end website architecture. Designing user interactions on web pages. Developing back-end website applications. Creating servers and databases for functionality. Ensuring responsiveness of applications. Working alongside graphic designers for web design features. Seeing through a project from conception to finished product. Designing and developing APIs. Meeting both technical and consumer needs. Staying abreast of developments in web applications and programming languages.
Full Stack Developer Requirements
Engineering Degree in computer science. Proficiency writing server-side Node.js components in the Express framework using features such as promises and async/await, with emphasis on the following: Performance considerations (parallelization with async, minimizing HTTP requests, minimizing DOM interaction, building CSS / JS front end files); Security knowledge (CSRF, SQL injection prevention, JS injection prevention). Proficiency with fundamental front-end languages such as HTML, CSS, and JavaScript. Familiarity with JavaScript frameworks such as React. Familiarity with languages such as PHP. Familiarity with database technology such as MySQL, Redis, and MongoDB. Familiarity with message queuing / streaming architectures such as Kafka and RabbitMQ. Excellent verbal communication skills. Good problem-solving skills. Attention to detail. Overall experience of 6+ years with at least 3+ years writing Node.js applications.
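The requirements above stress performance through async parallelization of independent requests. The posting targets Node.js/Express, but the underlying pattern is language-neutral; as a minimal, hedged illustration, the Python asyncio sketch below launches several independent "fetches" concurrently instead of sequentially (a simulated delay stands in for a real HTTP call).

```python
# Minimal sketch: run independent I/O-bound calls concurrently with asyncio.
import asyncio

async def fetch(resource: str) -> str:
    # Stand-in for an HTTP request; real code would use an HTTP client here.
    await asyncio.sleep(0.1)
    return f"payload from {resource}"

async def main() -> None:
    # gather() launches all calls concurrently and waits for every result.
    results = await asyncio.gather(*(fetch(r) for r in ("users", "orders", "inventory")))
    print(results)

asyncio.run(main())
```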
Posted 1 month ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description Amazon’s eCommerce Foundation (eCF) organization is responsible for the core components that drive the Amazon website and customer experience. Serving millions of customer page views and orders per day, eCF builds for scale. As an organization within eCF, the Business Data Technologies (BDT) group is no exception. We collect petabytes of data from thousands of data sources inside and outside Amazon including the Amazon catalog system, inventory system, customer order system, page views on the website. We provide interfaces for our internal customers to access and query the data hundreds of thousands of times per day, using Amazon Web Services’ (AWS) Redshift, Hive, Spark. We build scalable solutions that grow with the Amazon business. BDT team is building an enterprise-wide Big Data Marketplace leveraging AWS technologies. We work closely with AWS teams like EMR/Spark, Redshift, Athena, S3 and others. We are developing innovative products including the next-generation of data catalog, data discovery engine, data transformation platform, and more with state-of-the-art user experience. We’re looking for top engineers to build them from the ground up. This is a hands-on position where you will do everything from designing & building extremely scalable components to formulating strategy and direction for Big Data at Amazon. You will also mentor junior engineers and work with the most sophisticated customers in the business to help them get the best results. You need to not only be a top software developer with excellent programming skills, have an understanding of big data and parallelization, and a stellar record of delivery, but also excel at leadership, customer obsession and have a real passion for massive-scale computing. Come help us build for the future of Data! Key job responsibilities An SDE-II in the Datashield team would lead product and tech initiatives within the team and beyond by partnering with internal and external stakeholders and teams. They would need to come up with technical strategies and designs for complex customer problems by leveraging out-of-the-box solutions to enable faster roll outs. They will deliver working software systems consisting of multiple features spanning the full software lifecycle including design, implementation, testing, deployment, and maintenance strategy. The problems they need to solve do not start with a defined technology strategy, and may have conflicting constraints. As a technology lead in the team, they will review other SDEs’ work to ensure it fits into the bigger picture and is well designed, extensible, performant, and secure.
Basic Qualifications 3+ years of non-internship professional software development experience 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience Experience programming with at least one software programming language Bachelor's degree in computer science or equivalent Preferred Qualifications 3+ years of full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations experience 1+ years of building large-scale machine-learning infrastructure for online recommendation, ads ranking, personalization or search experience Knowledge of professional software engineering & best practices for full software development life cycle, including coding standards, software architectures, code reviews, source control management, continuous deployments, testing, and operational excellence Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI HYD 13 SEZ Job ID: A2952490
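The description above centers on querying petabyte-scale data with engines such as Redshift, Hive, and Spark. As a small, hedged illustration of the Spark side (not Amazon's internal tooling), the PySpark sketch below runs a daily aggregation over a Parquet dataset; the S3 path and column names are hypothetical.

```python
# Minimal sketch: aggregate a large Parquet dataset by day with PySpark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("order-aggregation").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/orders/")  # hypothetical dataset
daily_totals = (
    orders.groupBy("order_date")
    .agg(F.count("*").alias("orders"), F.sum("order_amount").alias("revenue"))
    .orderBy("order_date")
)
daily_totals.show(10)
spark.stop()
```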
Posted 1 month ago
6.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Skills: Node.js, JavaScript, PHP, HTML, Cascading Style Sheets (CSS), MySQL, Full Stack Developer
Mumbai
Job Description
We are looking for a highly skilled computer programmer who is comfortable with both front and back end programming. Full stack developers are responsible for developing and designing front end web architecture, ensuring the responsiveness of applications, and working alongside graphic designers for web design features, among other duties. Full stack developers will be required to see out a project from conception to final product, requiring good organizational skills and attention to detail.
Full Stack Developer Responsibilities
Developing front end website architecture. Designing user interactions on web pages. Developing back-end website applications. Creating servers and databases for functionality. Ensuring responsiveness of applications. Working alongside graphic designers for web design features. Seeing through a project from conception to finished product. Designing and developing APIs. Meeting both technical and consumer needs. Staying abreast of developments in web applications and programming languages.
Full Stack Developer Requirements
Engineering Degree in computer science. Proficiency writing server-side Node.js components in the Express framework using features such as promises and async/await, with emphasis on the following: Performance considerations (parallelization with async, minimizing HTTP requests, minimizing DOM interaction, building CSS / JS front end files); Security knowledge (CSRF, SQL injection prevention, JS injection prevention). Proficiency with fundamental front-end languages such as HTML, CSS, and JavaScript. Familiarity with JavaScript frameworks such as React. Familiarity with languages such as PHP. Familiarity with database technology such as MySQL, Redis, and MongoDB. Familiarity with message queuing / streaming architectures such as Kafka and RabbitMQ. Excellent verbal communication skills. Good problem-solving skills. Attention to detail. Overall experience of 6+ years with at least 3+ years writing Node.js applications.
Posted 1 month ago