8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Our Company Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours! The Opportunity Our engineering team develops the Adobe Experience Platform, offering innovative data management and analytics. Developing a reliable, resilient system at large scale is crucial. We use Big Data and open-source tech for Adobe's services. Our support for large enterprise products spans across geographies, requiring us to manage disparate data sources and ingestion mechanisms. The data must be easily accessible at very low latency to support various scenarios and use cases. We seek candidates with deep expertise in building low latency services at high scales who can lead us in accomplishing our vision. What you will need to succeed 8+ years in design and development of data-driven large distributed systems 3+ years as an architect building large-scale data-intensive distributed systems and services Relevant experience building application layers on top of Apache Spark Strong experience with Hive SQL and Presto DB Experience leading architecture designs to approval while collaborating with multiple collaborators, dependencies, and internal/external customer requirements In-depth work experience with open-source technologies like Apache Kafka, Apache Spark, Kubernetes, etc. Experience with big data technologies on public clouds such as Azure, AWS, or Google Cloud Platform Experience with in-memory distributed caches like Redis, Memcached, etc. Strong coding (design patterns) and design proficiencies setting examples for others; contributions to open source are highly desirable Proficiency in data structures and algorithms Cost consciousness around computation and memory requirements Strong verbal and written communication skills BTech/MTech/MS in Computer Science What you'll do Lead the technical design and implementation strategy for major systems and components of the Adobe Experience Platform Evaluate and drive the architecture and technology choices for major systems/components Design, build, and deploy products with outstanding quality Innovate the current system to improve robustness, ease, and convenience Articulate design and code choices to cross-functional teams Mentor and guide a high-performing team Review and provide feedback on features, technology, architecture, design, time & budget estimates, and test strategies Engage in creative problem-solving Develop and evolve engineering standard methodologies to improve the team’s efficiency Partner with other teams across Adobe to achieve common goals Discover what makes Adobe a great place to work: Life @ Adobe Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more. 
Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
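For context on the low-latency serving requirement described in the role above, the following is a minimal read-through caching sketch in Python using Redis. The profile dataset, key layout, TTL, and the fetch_profile_from_store helper are hypothetical illustrations under stated assumptions, not Adobe's implementation.

```python
import json
import redis  # pip install redis

# Hypothetical slow backing-store lookup (e.g., a warehouse or profile service).
def fetch_profile_from_store(profile_id: str) -> dict:
    # Placeholder: a real service would query the authoritative store here.
    return {"id": profile_id, "segments": ["example-segment"]}

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_profile(profile_id: str, ttl_seconds: int = 300) -> dict:
    """Read-through cache: serve hot profiles from Redis, fall back to the store."""
    key = f"profile:{profile_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: low-latency path
    profile = fetch_profile_from_store(profile_id)
    cache.setex(key, ttl_seconds, json.dumps(profile))  # populate with a TTL
    return profile

if __name__ == "__main__":
    print(get_profile("42"))
```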
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
The Applications Development Supervisor role is an intermediate management position where you will lead and direct a team of employees to establish and implement new or revised application systems and programs in coordination with the Technology team. Your main objective will be to oversee applications systems analysis and programming activities.

Your responsibilities will include managing an Applications Development team, recommending new work procedures for process efficiencies, resolving issues by identifying solutions based on technical experience, developing comprehensive knowledge of how your area integrates within apps development, ensuring the quality of tasks provided by the team, acting as a backup to the Applications Development Manager, and serving as an advisor to junior developers and analysts. You will also need to appropriately assess risk in business decisions, safeguarding Citigroup's reputation and assets by driving compliance with laws and regulations, adhering to policy, applying ethical judgment, and effectively supervising the activity of others.

To qualify for this role, you should have 2-4 years of relevant experience; proficiency in Big Data, Spark, Hive, Hadoop, Python, and Java; experience in managing and implementing successful projects; the ability to make technical decisions on software development projects; and knowledge of dependency management, change management, continuous integration testing tools, audit/compliance requirements, software engineering, and object-oriented design. Demonstrated leadership, management skills, and clear communication are essential. A Bachelor's degree or equivalent experience is required for this position. Please note that this job description provides an overview of the work performed, and other job-related duties may be assigned as necessary. If you require a reasonable accommodation due to a disability to use our search tools or apply for a career opportunity, please review Accessibility at Citi. You can also view Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
You will be joining our team as a Senior Data Scientist with expertise in Artificial Intelligence (AI) and Machine Learning (ML). The ideal candidate should possess a minimum of 5-7 years of experience in data science, focusing on AI/ML applications. You are expected to have a strong background in various ML algorithms; programming languages such as Python, R, or Scala; and data processing frameworks like Apache Spark. Proficiency in data visualization tools and experience in model deployment using Docker, Kubernetes, and cloud services will be essential for this role. Your responsibilities will include end-to-end AI/ML project delivery, from data processing to model deployment. You should have a good understanding of statistics, probability, and mathematical concepts used in AI/ML. Additionally, familiarity with big data tools, natural language processing techniques, time-series analysis, and MLOps will be advantageous. As a Senior Data Scientist, you are expected to lead cross-functional project teams and manage data science projects in a production setting. Your problem-solving skills, communication skills, and curiosity to stay updated with the latest advancements in AI and ML are crucial for success in this role. You should be able to convey technical insights clearly to diverse audiences and quickly adapt to new technologies. If you are an innovative, analytical, and collaborative team player with a proven track record in AI/ML project delivery, we invite you to apply for this exciting opportunity.
Posted 3 days ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Our Company
Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

About Connect
Adobe Connect, within the Adobe DALP BU, is one of the leading online webinar and training delivery platforms. The product has a large customer base that has been using it for many years, and it has evolved significantly over time to stay on top of the latest tech stack. It offers the opportunity to work with a wide range of technologies on both the client and server side.

What You’ll Do:
Work as a hands-on Machine Learning Engineer who releases models to production. Develop classifiers, predictive models, and multi-variate optimization algorithms on large-scale datasets using advanced statistical modeling, machine learning, and data mining, with a special focus on R&D: building predictive models for conversion optimization, bidding algorithms for pacing and optimization, reinforcement learning problems, and forecasting. Collaborate with Product Management to bring AI-based assistive experiences to life, and socialize what’s possible now or in the near future to inform the roadmap. You will be responsible for driving all aspects of ML product development: ML modeling, data/ML pipelines, quality evaluations, productization, and ML Ops. Create and instill a team culture that focuses on sound scientific processes and encourages deep engagement with our customers. Handle project scope and risks with data, analytics, and creative problem-solving.

What you require:
A solid foundation in machine learning, classifiers, statistical modeling, and multivariate optimization techniques. Experience with control systems, reinforcement learning problems, and contextual bandit algorithms. Experience with DNN frameworks such as TensorFlow or PyTorch on large-scale datasets, plus familiarity with R, scikit-learn, and pandas. Proficiency in one or more of Python, Java/Scala, SQL, Hive, and Spark. GenAI and RAG pipelines are must-have technologies; Git, Docker, Kubernetes, and cloud-based solutions are good to have. A general understanding of data structures, algorithms, multi-threaded programming, and distributed computing concepts. The ability to be a self-starter and work closely with other data scientists and software engineers to design, test, and build production-ready ML and optimization models and distributed algorithms running on large-scale datasets.

Ideal Candidate Profile:
A total of 10+ years of experience, including at least 5 years in technical roles involving Data Science, Machine Learning, or Statistics. A Master's or B.Tech in Computer Science or Statistics. Comfort with ambiguity, adaptability to evolving priorities, and the ability to lead a team while working autonomously. Proven management experience with highly diverse and global teams. Demonstrated ability to influence technical and non-technical stakeholders. Proven ability to effectively manage in a high-growth, matrixed organization.
Track record of delivering cloud-scale, data-driven products, and services that are widely adopted with large customer bases. An ability to think strategically, look around corners, and create a vision for the current quarter, the year, and five years down the road. A relentless pursuit of great customer experiences and continuous improvements to the product. Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
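As a rough illustration of the bandit-style optimization work mentioned in the role above, here is a minimal, non-contextual epsilon-greedy sketch in Python. The arms (bid multipliers) and the simulated conversion reward are hypothetical; production systems would use contextual features and far more rigorous evaluation.

```python
import random
from collections import defaultdict

class EpsilonGreedyBandit:
    """Toy epsilon-greedy policy over a fixed set of arms (e.g., bid levels)."""
    def __init__(self, arms, epsilon=0.1):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.counts = defaultdict(int)    # pulls per arm
        self.values = defaultdict(float)  # running mean reward per arm

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(self.arms)                       # explore
        return max(self.arms, key=lambda a: self.values[a])       # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n       # incremental mean

# Simulated use: arms are hypothetical bid multipliers, reward is a fake conversion signal.
bandit = EpsilonGreedyBandit(arms=[0.8, 1.0, 1.2])
for _ in range(1000):
    arm = bandit.select()
    reward = 1.0 if random.random() < 0.05 * arm else 0.0
    bandit.update(arm, reward)
print({a: round(bandit.values[a], 4) for a in bandit.arms})
```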
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
As a Data Analyst at our organization, you will play a crucial role in analyzing data to identify trends, patterns, and insights that inform business decisions. Your responsibilities will include setting up robust automated dashboards for performance management, developing and maintaining databases, and preparing reports for management that highlight trends, patterns, and predictions based on relevant data. To succeed in this role, you should possess strong problem-solving skills, advanced analytical skills with expertise in Excel, SQL, and Hive, and experience in handling large-scale datasets efficiently. Additionally, you should have excellent communication and project management abilities, along with the capability to interact with and influence business stakeholders effectively. Experience with web analytics platforms is a plus, and the ideal candidate will have 3-6 years of professional experience. Joining our team will provide you with the opportunity to be part of the largest fintech lending play in India. You will work in a fun, energetic, and once-in-a-lifetime environment that fosters your career growth and enables you to achieve your best possible outcome. With over 500 million registered users and 21 million merchants in our ecosystem, we are uniquely positioned to democratize credit for deserving consumers and merchants. You will be part of India's largest digital lending story and contribute to our commitment to this mission. Seize this opportunity to be a key player in our story!
Posted 3 days ago
5.0 years
0 Lacs
India
Remote
Where you’ll work: India (Remote) Engineering at GoTo We’re the trailblazers of remote work technology. We build powerful, flexible work software that empowers everyone to live their best life, at work and beyond. And blaze even more trails along the way. There’s ample room for growth – so you can blaze your own trail here too. When you join a GoTo product team, you’ll take on a key role in this process and see your work be used by millions of users worldwide. Your Day to Day As a Senior Data Engineer, you would be: Design and Develop Pipelines : Build robust, scalable, and efficient ETL/ELT data pipelines to process structured data from diverse sources. Big Data Processing : Develop and optimize large-scale data workflows using Apache Spark, with strong hands-on experience in building ETL pipelines. Cloud-Native Data Solutions : Architect and implement data solutions using AWS services such as S3, EMR, Lambda, and EKS. Data Governance : Manage and govern data using catalogs like Hive or Unity Catalog; ensure strong data lineage, access controls, and metadata management. Workflow Orchestration : Schedule, monitor, and orchestrate workflows using Apache Airflow or similar tools. Data Quality & Monitoring : Implement quality checks, logging, monitoring, and alerting to ensure pipeline reliability and visibility. Cross-Functional Collaboration : Partner with analysts, data scientists, and business stakeholders to deliver high-quality data for applications and enable self-service BI. Compliance & Security : Uphold best practices in data governance, security, and compliance across the data ecosystem. Mentorship & Standards : Mentor junior engineers and help evolve engineering practices including CI/CD, testing, and documentation. What We’re Looking For As a Senior Data Engineer, your background will look like: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. 5+ years of experience in data engineering or software development, with a proven record of maintaining production-grade pipelines. Proficient in Python and SQL for data transformation and analytics. Strong expertise in Apache Spark , including data lake management, ACID transactions, schema enforcement/evolution, and time travel. In-depth knowledge of AWS services —especially S3, EMR, Lambda, and EKS—with a solid grasp of cloud architecture and security best practices. Solid data modeling skills (dimensional, normalized) and an understanding of data warehousing and lakehouse paradigms. Experience with BI tools like Tableau or Power BI . Familiar with setting up data quality , monitoring, and observability frameworks. Excellent communication and collaboration skills, with the ability to thrive in an agile and multicultural team environment. Nice to Have Experience working on the Databricks Platform Knowledge of Delta or Apache Iceberg file formats Passion for Machine Learning and AI; enthusiasm to explore and apply intelligent systems. What We Offer At GoTo, we believe in supporting our employees with a comprehensive range of benefits designed to fit your life—at work and beyond. 
Here are just some of the benefits and perks you can expect when you join our team: Comprehensive health benefits, life and disability insurance, and fertility and family-forming support program Generous paid time off, paid holidays, volunteer time off, and quarterly self-care days and no meeting days Tuition and reading reimbursement programs to support your continuous learning and professional growth Thrive Global Wellness Program, confidential Employee Assistance Program (EAP), as well as One to One Wellness Coaching Employee programs—including Employee Resource Groups (ERGs), GoTo Gives, and our charitable matching program—to amplify your connection and impact Registered Retirement Savings Plan (RRSP) to help you plan for your future GoTo performance bonus program to celebrate your impact and contributions Monthly remote work stipend to support your home office expenses At GoTo, you’ll find the flexibility, resources, and support you need to thrive—at work, at home, and everywhere in between. You’ll work towards a shared goal with an open-minded, cohesive team that’s greater than the sum of its parts. We’re committed to creating an inclusive space for everyone, because we know unique perspectives make us a stronger company and community. Join us and be part of a company that invests in your future, where together we’ll Be Real, Think Big, Move Fast, Keep Growing, and stay Customer Obsessed .Learn more.
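To illustrate the kind of ETL pipeline with built-in quality checks that the Senior Data Engineer role above describes, here is a minimal PySpark sketch. The S3 paths, column names, and the 95% retention threshold are hypothetical assumptions, not GoTo's actual pipeline.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical S3 locations; a real pipeline would take these from Airflow config.
SOURCE = "s3a://example-raw-bucket/events/"
TARGET = "s3a://example-curated-bucket/events_clean/"

spark = SparkSession.builder.appName("events-etl-sketch").getOrCreate()

raw = spark.read.json(SOURCE)

clean = (
    raw.dropDuplicates(["event_id"])                 # assumed unique key
       .filter(F.col("event_ts").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

# Minimal data-quality gate: fail the run if too many rows were dropped.
raw_count, clean_count = raw.count(), clean.count()
if raw_count and clean_count / raw_count < 0.95:
    raise ValueError(f"Quality check failed: kept {clean_count}/{raw_count} rows")

clean.write.mode("overwrite").partitionBy("event_date").parquet(TARGET)
spark.stop()
```

A scheduler such as Airflow would typically invoke this as one task and alert on the raised exception.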
Posted 3 days ago
2.0 - 9.0 years
0 Lacs
Karnataka
On-site
We are seeking Data Architects, Senior Data Architects, and Principal Data Architects to join our team. In this role, you will be involved in a combination of hands-on contribution, customer engagement, and technical team management. As a Data Architect, your responsibilities will include designing, architecting, deploying, and maintaining solutions on the MS Azure platform using various Cloud & Big Data technologies. You will manage the full life-cycle of Data Lake / Big Data solutions, from requirement gathering and analysis through platform selection, architecture design, and deployment. It will be your responsibility to implement scalable solutions on the Cloud and collaborate with a team of business domain experts, data scientists, and application developers to develop Big Data solutions. Moreover, you will be expected to explore and learn new technologies for creative problem solving and mentor a team of Data Engineers. The ideal candidate should possess strong hands-on experience in implementing Data Lakes with technologies such as Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hub & Streaming Analytics, Cosmos DB, and Purview. Additionally, experience with big data technologies like Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase, MongoDB, Neo4J, Elastic Search, Impala, Sqoop, etc., is required. Proficiency in programming and debugging in Python and Scala/Java is essential, with experience in building REST services considered beneficial. Candidates should also have experience in supporting BI and Data Science teams in consuming data in a secure and governed manner, along with a good understanding of CI/CD with Git and Jenkins / Azure DevOps. Experience in setting up cloud-computing infrastructure solutions, hands-on exposure to NoSQL databases, and data modelling in Hive are all highly valued. Applicants should have a minimum of 9 years of technical experience, with at least 5 years on MS Azure and 2 years on Hadoop (CDH/HDP).
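As an illustration of the Azure Data Lake work described above, here is a minimal PySpark sketch that aggregates raw data from ADLS Gen2 into a curated Delta table. The storage account, container names, and schema are hypothetical, and the snippet assumes a Spark session already configured with Delta Lake and ADLS credentials (for example, on Azure Databricks).

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical ADLS Gen2 paths; real paths would come from platform configuration.
RAW_PATH = "abfss://raw@examplelake.dfs.core.windows.net/sales/"
CURATED_PATH = "abfss://curated@examplelake.dfs.core.windows.net/sales_daily/"

spark = SparkSession.builder.appName("adls-delta-sketch").getOrCreate()

daily = (
    spark.read.parquet(RAW_PATH)
         .withColumn("sale_date", F.to_date("sale_ts"))
         .groupBy("sale_date", "store_id")
         .agg(F.sum("amount").alias("total_amount"))
)

# Delta provides ACID writes for the curated zone of the lake.
daily.write.format("delta").mode("overwrite").save(CURATED_PATH)
```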
Posted 3 days ago
9.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description Tech Lead – Azure/Snowflake & AWS Migration Key Responsibilities Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services. Build robust, efficient SQL and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets. Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including: Rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches. Migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines. Migrating Redshift workloads to Snowflake with schema conversion and performance optimization. Transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage. Redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe. Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale. Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing. Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation. Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies. Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching. Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning. Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability. Required Qualifications 9+ years of data engineering experience, with 3+ years on Microsoft Azure stack and hands-on Snowflake expertise. Proficiency in: Python for scripting and ETL orchestration SQL for complex data transformation and performance tuning in Snowflake Azure Data Factory and Synapse Analytics (SQL Pools) Experience in migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK. Strong understanding of cloud architecture and hybrid data environments across AWS and Azure. Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS. Familiarity with Azure Event Hubs, Logic Apps, and Key Vault. Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads. Preferred Qualifications Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing. Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake. Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments. Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent. Skills Azure,AWS REDSHIFT,Athena,Azure Data Lake
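To illustrate the Streams & Tasks pattern referenced above for incremental ingestion into Snowflake, here is a hedged Python sketch using the Snowflake connector. The account, credentials, and table names are placeholders; a real deployment would manage secrets through Key Vault or key-pair authentication rather than inline values.

```python
import snowflake.connector  # pip install snowflake-connector-python

# Hypothetical connection settings and object names for illustration only.
conn = snowflake.connector.connect(
    account="example_account", user="example_user", password="example_password",
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="STAGING",
)

ddl_statements = [
    # Capture changes on the landing table populated by Snowpipe.
    "CREATE OR REPLACE STREAM ORDERS_STREAM ON TABLE ORDERS_RAW",
    # Periodically merge new changes into the curated table.
    """
    CREATE OR REPLACE TASK MERGE_ORDERS
      WAREHOUSE = TRANSFORM_WH
      SCHEDULE = '5 MINUTE'
    AS
      MERGE INTO ORDERS_CURATED t
      USING ORDERS_STREAM s ON t.ORDER_ID = s.ORDER_ID
      WHEN MATCHED THEN UPDATE SET t.STATUS = s.STATUS
      WHEN NOT MATCHED THEN INSERT (ORDER_ID, STATUS) VALUES (s.ORDER_ID, s.STATUS)
    """,
    "ALTER TASK MERGE_ORDERS RESUME",
]

cur = conn.cursor()
try:
    for stmt in ddl_statements:
        cur.execute(stmt)  # each statement runs independently
finally:
    cur.close()
    conn.close()
```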
Posted 3 days ago
9.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description Tech Lead – Azure/Snowflake & AWS Migration Key Responsibilities Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services. Build robust, efficient SQL and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets. Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including: Rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches. Migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines. Migrating Redshift workloads to Snowflake with schema conversion and performance optimization. Transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage. Redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe. Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale. Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing. Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation. Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies. Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching. Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning. Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability. Required Qualifications 9+ years of data engineering experience, with 3+ years on Microsoft Azure stack and hands-on Snowflake expertise. Proficiency in: Python for scripting and ETL orchestration SQL for complex data transformation and performance tuning in Snowflake Azure Data Factory and Synapse Analytics (SQL Pools) Experience in migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK. Strong understanding of cloud architecture and hybrid data environments across AWS and Azure. Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS. Familiarity with Azure Event Hubs, Logic Apps, and Key Vault. Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads. Preferred Qualifications Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing. Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake. Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments. Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent. Senior Data Engineer – Azure/Snowflake Migration Key Responsibilities Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services. 
Build robust, efficient SQL and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets. Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including: Rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches. Migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines. Migrating Redshift workloads to Snowflake with schema conversion and performance optimization. Transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage. Redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe. Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale. Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing. Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation. Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies. Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching. Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning. Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability. Required Qualifications 7+ years of data engineering experience, with 3+ years on Microsoft Azure stack and hands-on Snowflake expertise. Proficiency in: Python for scripting and ETL orchestration SQL for complex data transformation and performance tuning in Snowflake Azure Data Factory and Synapse Analytics (SQL Pools) Experience in migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK. Strong understanding of cloud architecture and hybrid data environments across AWS and Azure. Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS. Familiarity with Azure Event Hubs, Logic Apps, and Key Vault. Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads. Preferred Qualifications Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing. Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake. Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments. Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent. Skills Aws,Azure Data Lake,Python
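Complementing the sketch above, the snippet below illustrates two other Snowflake features this posting highlights: zero-copy cloning and time travel. Connection details and table names are again placeholders, not a prescribed implementation.

```python
import snowflake.connector

# Hypothetical connection and table names, for illustration only.
conn = snowflake.connector.connect(
    account="example_account", user="example_user", password="example_password",
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="CURATED",
)

statements = [
    # Zero-copy clone: an instant, storage-efficient copy for testing a migration step.
    "CREATE OR REPLACE TABLE ORDERS_MIGRATION_TEST CLONE ORDERS",
    # Time travel: query the table as it looked one hour ago.
    "SELECT COUNT(*) FROM ORDERS AT(OFFSET => -3600)",
]

cur = conn.cursor()
try:
    for stmt in statements:
        cur.execute(stmt)
        print(cur.fetchone())  # status row for DDL, count for the SELECT
finally:
    cur.close()
    conn.close()
```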
Posted 3 days ago
0.0 years
0 Lacs
Varthur, Bengaluru, Karnataka
On-site
Job Description: Application Developer Bangalore, Karnataka, India AXA XL offers risk transfer and risk management solutions to clients globally. We offer worldwide capacity, flexible underwriting solutions, a wide variety of client-focused loss prevention services and a team-based account management approach. AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality, but also actionable – enabling AXA XL’s executive leadership team to maximize benefits and facilitate sustained advantage. What you’ll be DOING What will your essential responsibilities include? We are seeking an experienced ETL Developer to support and evolve our enterprise data integration workflows. The ideal candidate will have deep expertise in Informatica PowerCenter, strong hands-on experience with Azure Data Factory and Databricks, and a passion for building scalable, reliable ETL pipelines. This role is critical for both day-to-day operational reliability and long-term modernization of our data engineering stack in the Azure cloud. Key Responsibilities: Maintain, monitor, and troubleshoot existing Informatica PowerCenter ETL workflows to ensure operational reliability and data accuracy. Enhance and extend ETL processes to support new data sources, updated business logic, and scalability improvements. Develop and orchestrate PySpark notebooks in Azure Databricks for data transformation, cleansing, and enrichment. Configure and manage Databricks clusters for performance optimization and cost efficiency. Implement Delta Lake solutions that support ACID compliance, versioning, and time travel for reliable data lake operations. Automate data workflows using Databricks Jobs and Azure Data Factory (ADF) pipelines. Design and manage scalable ADF pipelines, including parameterized workflows and reusable integration patterns. Integrate with Azure Blob Storage and ADLS Gen2 using Spark APIs for high-performance data ingestion and output. Ensure data quality, consistency, and governance across legacy and cloud-based pipelines. Collaborate with data analysts, engineers, and business teams to deliver clean, validated data for reporting and analytics. Participate in the full Software Development Life Cycle (SDLC) from design through deployment, with an emphasis on maintainability and audit readiness. Develop maintainable and efficient ETL logic and scripts following best practices in security and performance. Troubleshoot pipeline issues across data infrastructure layers, identifying and resolving root causes to maintain reliability. Create and maintain clear documentation of technical designs, workflows, and data processing logic for long-term maintainability and knowledge sharing. Stay informed on emerging cloud and data engineering technologies to recommend improvements and drive innovation. Follow internal controls, audit protocols, and secure data handling procedures to support compliance and operational standards. Provide accurate time and effort estimates for assigned development tasks, accounting for complexity and risk.
What you will BRING We’re looking for someone who has these abilities and skills: Advanced experience with Informatica PowerCenter, including mappings, workflows, session tuning, and parameterization Expertise in Azure Databricks + PySpark, including: Notebook development Cluster configuration and tuning Delta Lake (ACID, versioning, time travel) Job orchestration via Databricks Jobs or ADF Integration with Azure Blob Storage and ADLS Gen2 using Spark APIs Strong hands-on experience with Azure Data Factory: Building and managing pipelines Parameterization and dynamic datasets Notebook integration and pipeline monitoring Proficiency in SQL, PL/SQL, and scripting languages such as Python, Bash, or PowerShell Strong understanding of data warehousing, dimensional modeling, and data profiling Familiarity with Git, CI/CD pipelines, and modern DevOps practices Working knowledge of data governance, audit trails, metadata management, and compliance standards such as HIPAA and GDPR Effective problem-solving and troubleshooting skills with the ability to resolve performance bottlenecks and job failures Awareness of Azure Functions, App Services, API Management, and Application Insights Understanding of Azure Key Vault for secrets and credential management Familiarity with Spark-based big data ecosystems (e.g., Hive, Kafka) is a plus Who WE are AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals we don’t just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com What we OFFER Inclusion AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture and enables business growth and is critical to our success. That’s why we have made a strategic commitment to attract, develop, advance and retain the most inclusive workforce possible, and create a culture where everyone can bring their full selves to work and reach their highest potential. It’s about helping one another — and our business — to move forward and succeed. Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion with 20 Chapters around the globe. Robust support for Flexible Working Arrangements Enhanced family-friendly leave benefits Named to the Diversity Best Practices Index Signatory to the UK Women in Finance Charter Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer. Total Rewards AXA XL’s Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides competitive compensation and personalized, inclusive benefits that evolve as you do. We’re committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence. 
Sustainability At AXA XL, Sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future. Our 2023-26 Sustainability strategy, called “Roots of resilience”, focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations. Our Pillars: Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society – are essential to our future. We’re committed to protecting and restoring nature – from mangrove forests to the bees in our backyard – by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans. Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions. Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We’re training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting. AXA Hearts in Action : We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL’s “Hearts in Action” programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day – the Global Day of Giving. For more information, please see axaxl.com/sustainability.
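As a rough sketch of the Databricks and Delta Lake responsibilities described in this posting (ACID upserts and time travel), here is an illustrative PySpark example. The ADLS paths, key column, and table layout are hypothetical, and the code assumes a Delta-enabled Spark session such as Azure Databricks provides.

```python
from delta.tables import DeltaTable  # pip install delta-spark
from pyspark.sql import SparkSession

# Hypothetical ADLS Gen2 paths for a curated table and its incoming updates.
TARGET = "abfss://curated@examplelake.dfs.core.windows.net/policies/"
UPDATES = "abfss://raw@examplelake.dfs.core.windows.net/policy_updates/"

spark = SparkSession.builder.appName("delta-merge-sketch").getOrCreate()

updates = spark.read.parquet(UPDATES)

# Upsert (MERGE) keeps the curated table consistent under Delta's ACID guarantees.
target = DeltaTable.forPath(spark, TARGET)
(
    target.alias("t")
          .merge(updates.alias("u"), "t.policy_id = u.policy_id")
          .whenMatchedUpdateAll()
          .whenNotMatchedInsertAll()
          .execute()
)

# Time travel: read an earlier version of the table for audit or rollback checks.
previous = spark.read.format("delta").option("versionAsOf", 0).load(TARGET)
print(previous.count())
```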
Posted 3 days ago
0.0 - 5.0 years
0 Lacs
Pune, Maharashtra
Remote
R022242 Pune, Maharashtra, India Engineering Regular Location Details: Pune, India This is a hybrid position. You’ll divide your time between working remotely from your home and an office, so you should live within commuting distance. Hybrid teams may work in-office as much as a few times a week or as little as once a month or quarter, as decided by leadership. The hiring manager can share more about what hybrid work might look like for this team Join our Team Are you excited about building world-class software solutions that empower millions of customers globally? At GoDaddy, our engineers are at the forefront of developing innovative platforms that drive our core domain businesses. We are seeking a skilled Senior Machine Learning Scientist to join our Domain Search team, where you will design, build, and maintain the foundational systems and services powering GoDaddy’s search, ML, and GenAI platforms. In this role, you will develop and apply machine learning and LLM-based methods to improve our customers’ search experience and play a major part in improving the search page across all markets we serve. Whether you’re passionate about crafting highly scalable systems or developing seamless customer experiences, your work will be critical to ensuring performance, scalability, and reliability for our customers worldwide. Join us and help craft the future of software at GoDaddy! What you'll get to do... Work with the latest deep learning and search technologies to develop and optimize advanced machine learning models to improve our customers’ experience Be self-driven, understand the data we have, and provide data-driven insights to all of our challenges Mine datasets to develop features and models to improve search relevance and ranking algorithms Design and analyze experiments to test new product ideas Understand patterns and insights about what our users search for and purchase to help personalize our recommendations Your experience should include... 5 years of industry experience in deep learning and software development Skilled in machine learning, statistics, and natural language processing (NLP) Proficient with deep learning frameworks such as PyTorch and handling large datasets Experienced in programming languages like Python, Java, or similar Familiar with large-scale data analytics using Spark You might also have... Ph.D. in a related field preferred Experience with Amazon AWS, containerized solutions, and both SQL and NoSQL databases Strong understanding of software security standard processes Experience with Hadoop technologies such as Spark, Hive, and other big data tools; data analytics and machine learning experience are a plus Experience with Elastic Search and search technologies is a plus, with a passion for developing innovative solutions for real-world business problems We've got your back... We offer a range of total rewards that may include paid time off, retirement savings (e.g., 401k, pension schemes), bonus/incentive eligibility, equity grants, participation in our employee stock purchase plan, competitive health benefits, and other family-friendly benefits including parental leave. GoDaddy’s benefits vary based on individual role and location and can be reviewed in more detail during the interview process We also embrace our diverse culture and offer a range of Employee Resource Groups (Culture). Have a side hustle? No problem. We love entrepreneurs! Most importantly, come as you are and make your own way About us... 
GoDaddy is empowering everyday entrepreneurs around the world by providing the help and tools to succeed online, making opportunity more inclusive for all. GoDaddy is the place people come to name their idea, build a professional website, attract customers, sell their products and services, and manage their work. Our mission is to give our customers the tools, insights, and people to transform their ideas and personal initiative into success. To learn more about the company, visit About Us At GoDaddy, we know diverse teams build better products—period. Our people and culture reflect and celebrate that sense of diversity and inclusion in ideas, experiences and perspectives. But we also know that’s not enough to build true equity and belonging in our communities. That’s why we prioritize integrating diversity, equity, inclusion and belonging principles into the core of how we work every day—focusing not only on our employee experience, but also our customer experience and operations. It’s the best way to serve our mission of empowering entrepreneurs everywhere, and making opportunity more inclusive for all. To read more about these commitments, as well as our representation and pay equity data, check out our Diversity and Pay Parity annual report which can be found on our Diversity Careers page GoDaddy is proud to be an equal opportunity employer . GoDaddy will consider for employment qualified applicants with criminal histories in a manner consistent with local and federal requirements. Refer to our full EEO policy Our recruiting team is available to assist you in completing your application. If they could be helpful, please reach out to myrecruiter@godaddy.com GoDaddy doesn’t accept unsolicited resumes from recruiters or employment agencies
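To give a flavor of the search-ranking work described in the role above, here is a toy PyTorch sketch that ranks candidate domain names against a query by cosine similarity of character-level embeddings. The vocabulary, embedding size, and untrained model are purely illustrative, not GoDaddy's ranking system.

```python
import torch
import torch.nn.functional as F

# Toy character-level embedding scorer; dimensions and vocabulary are illustrative.
VOCAB = {c: i + 1 for i, c in enumerate("abcdefghijklmnopqrstuvwxyz0123456789-.")}
EMBED = torch.nn.EmbeddingBag(len(VOCAB) + 1, 32, mode="mean")

def embed(text: str) -> torch.Tensor:
    # Unknown characters (including spaces) map to index 0.
    ids = torch.tensor([[VOCAB.get(c, 0) for c in text.lower()]])
    return EMBED(ids).squeeze(0)

def rank(query: str, candidates: list) -> list:
    """Return candidates sorted by cosine similarity to the query embedding."""
    q = embed(query)
    scored = [(c, F.cosine_similarity(q, embed(c), dim=0).item()) for c in candidates]
    return sorted(scored, key=lambda x: x[1], reverse=True)

print(rank("coffee shop", ["coffeeshop.com", "bestbeans.co", "example.org"]))
```

A trained model would learn these embeddings from click and purchase signals rather than using random initialization.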
Posted 3 days ago
2.0 years
0 Lacs
Pune, Maharashtra
Remote
R022243 Pune, Maharashtra, India Engineering Regular Location Details: Pune, India This is a hybrid position. You’ll divide your time between working remotely from your home and an office, so you should live within commuting distance. Hybrid teams may work in-office as much as a few times a week or as little as once a month or quarter, as decided by leadership. The hiring manager can share more about what hybrid work might look like for this team Join our Team Are you excited about building world-class software solutions that power millions of customers globally? At GoDaddy, our engineers are at the forefront of crafting innovative platforms that drive our core domain businesses, and we’re looking for dedicated professionals to help us craft the future of software. Whether you’re passionate about developing highly scalable systems, seamless customer experiences, or advanced machine learning and LLM-based methods to improve the search experience, we have a place for you! As part of our Domain Search, Registrars, and Investors teams, you’ll work on impactful products like our domain name search engine, registration and management services, high-scale DNS, investor experience, and personalization through ML models. You’ll play a key role in improving the search page for customers worldwide, owning the design, code, and data quality of your products end-to-end. We value strong software engineers with experience in microservices, cloud computing, distributed systems, data processing, and customer focus—and we’re flexible regarding your technology background. Join a small, high-impact team of dedicated engineers as we build and iterate upon the world’s largest domain name registrar services and secondary marketplace What you'll get to do... Develop and maintain scalable, cloud-ready applications and APIs, contributing across the full technology stack, including persistence and service layers Leverage data analytics and ETL processes to transform, enrich, and improve product and customer experience in both batch and streaming scenarios Ensure high code quality through unit/integration testing, code reviews, and consistency with standard methodologies Lead technical projects through architecture, design, and implementation phases, solving end-to-end problems Collaborate effectively with distributed teams Your experience should include... 2+ years of industrial experience with a strong background in deep learning and software development Skilled in machine learning, statistics, and natural language processing (NLP) Hands-on experience with deep learning frameworks such as PyTorch and working with large datasets Proficient in programming languages such as Python or Java Familiar with large-scale data analytics using Spark You might also have... Experience with AWS and containerized solutions Proficient in both SQL and NoSQL databases Strong understanding of software security standard processes Experience with Hadoop technologies (e.g., Spark, Hive) and big data analytics; ML and search technologies (e.g., Elastic Search) are a plus We've got your back... We offer a range of total rewards that may include paid time off, retirement savings (e.g., 401k, pension schemes), bonus/incentive eligibility, equity grants, participation in our employee stock purchase plan, competitive health benefits, and other family-friendly benefits including parental leave. 
GoDaddy’s benefits vary based on individual role and location and can be reviewed in more detail during the interview process We also embrace our diverse culture and offer a range of Employee Resource Groups (Culture). Have a side hustle? No problem. We love entrepreneurs! Most importantly, come as you are and make your own way About us... GoDaddy is empowering everyday entrepreneurs around the world by providing the help and tools to succeed online, making opportunity more inclusive for all. GoDaddy is the place people come to name their idea, build a professional website, attract customers, sell their products and services, and manage their work. Our mission is to give our customers the tools, insights, and people to transform their ideas and personal initiative into success. To learn more about the company, visit About Us At GoDaddy, we know diverse teams build better products—period. Our people and culture reflect and celebrate that sense of diversity and inclusion in ideas, experiences and perspectives. But we also know that’s not enough to build true equity and belonging in our communities. That’s why we prioritize integrating diversity, equity, inclusion and belonging principles into the core of how we work every day—focusing not only on our employee experience, but also our customer experience and operations. It’s the best way to serve our mission of empowering entrepreneurs everywhere, and making opportunity more inclusive for all. To read more about these commitments, as well as our representation and pay equity data, check out our Diversity and Pay Parity annual report which can be found on our Diversity Careers page GoDaddy is proud to be an equal opportunity employer . GoDaddy will consider for employment qualified applicants with criminal histories in a manner consistent with local and federal requirements. Refer to our full EEO policy Our recruiting team is available to assist you in completing your application. If they could be helpful, please reach out to myrecruiter@godaddy.com GoDaddy doesn’t accept unsolicited resumes from recruiters or employment agencies
Posted 3 days ago
0.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Job Description: Application Developer Bangalore, Karnataka, India AXA XL offers risk transfer and risk management solutions to clients globally. We offer worldwide capacity, flexible underwriting solutions, a wide variety of client-focused loss prevention services and a team-based account management approach. AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality, but also actionable – enabling AXA XL’s executive leadership team to maximize benefits and facilitate sustained advantage. What you’ll be DOING What will your essential responsibilities include? We are seeking an experienced ETL Developer to support and evolve our enterprise data integration workflows. The ideal candidate will have deep expertise in Informatica PowerCenter, strong hands-on experience with Azure Data Factory and Databricks, and a passion for building scalable, reliable ETL pipelines. This role is critical for both day-to-day operational reliability and long-term modernization of our data engineering stack in the Azure cloud. Key Responsibilities: Maintain, monitor, and troubleshoot existing Informatica PowerCenter ETL workflows to ensure operational reliability and data accuracy. Enhance and extend ETL processes to support new data sources, updated business logic, and scalability improvements. Develop and orchestrate PySpark notebooks in Azure Databricks for data transformation, cleansing, and enrichment. Configure and manage Databricks clusters for performance optimization and cost efficiency. Implement Delta Lake solutions that support ACID compliance, versioning, and time travel for reliable data lake operations. Automate data workflows using Databricks Jobs and Azure Data Factory (ADF) pipelines. Design and manage scalable ADF pipelines, including parameterized workflows and reusable integration patterns. Integrate with Azure Blob Storage and ADLS Gen2 using Spark APIs for high-performance data ingestion and output. Ensure data quality, consistency, and governance across legacy and cloud-based pipelines. Collaborate with data analysts, engineers, and business teams to deliver clean, validated data for reporting and analytics. Participate in the full Software Development Life Cycle (SDLC) from design through deployment, with an emphasis on maintainability and audit readiness. Develop maintainable and efficient ETL logic and scripts following best practices in security and performance. Troubleshoot pipeline issues across data infrastructure layers, identifying and resolving root causes to maintain reliability. Create and maintain clear documentation of technical designs, workflows, and data processing logic for long-term maintainability and knowledge sharing. Stay informed on emerging cloud and data engineering technologies to recommend improvements and drive innovation. Follow internal controls, audit protocols, and secure data handling procedures to support compliance and operational standards. Provide accurate time and effort estimates for assigned development tasks, accounting for complexity and risk. 
What you will BRING We’re looking for someone who has these abilities and skills: Advanced experience with Informatica PowerCenter, including mappings, workflows, session tuning, and parameterization Expertise in Azure Databricks + PySpark, including: Notebook development Cluster configuration and tuning Delta Lake (ACID, versioning, time travel) Job orchestration via Databricks Jobs or ADF Integration with Azure Blob Storage and ADLS Gen2 using Spark APIs Strong hands-on experience with Azure Data Factory: Building and managing pipelines Parameterization and dynamic datasets Notebook integration and pipeline monitoring Proficiency in SQL, PL/SQL, and scripting languages such as Python, Bash, or PowerShell Strong understanding of data warehousing, dimensional modeling, and data profiling Familiarity with Git, CI/CD pipelines, and modern DevOps practices Working knowledge of data governance, audit trails, metadata management, and compliance standards such as HIPAA and GDPR Effective problem-solving and troubleshooting skills with the ability to resolve performance bottlenecks and job failures Awareness of Azure Functions, App Services, API Management, and Application Insights Understanding of Azure Key Vault for secrets and credential management Familiarity with Spark-based big data ecosystems (e.g., Hive, Kafka) is a plus Who WE are AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals we don’t just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com What we OFFER Inclusion AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture and enables business growth and is critical to our success. That’s why we have made a strategic commitment to attract, develop, advance and retain the most inclusive workforce possible, and create a culture where everyone can bring their full selves to work and reach their highest potential. It’s about helping one another — and our business — to move forward and succeed. Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion with 20 Chapters around the globe. Robust support for Flexible Working Arrangements Enhanced family-friendly leave benefits Named to the Diversity Best Practices Index Signatory to the UK Women in Finance Charter Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer. Total Rewards AXA XL’s Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides competitive compensation and personalized, inclusive benefits that evolve as you do. We’re committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence. 
Sustainability At AXA XL, Sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future. Our 2023-26 Sustainability strategy, called “Roots of resilience”, focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations. Our Pillars: Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society – are essential to our future. We’re committed to protecting and restoring nature – from mangrove forests to the bees in our backyard – by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans. Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions. Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We’re training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting. AXA Hearts in Action : We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL’s “Hearts in Action” programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day – the Global Day of Giving. For more information, please see axaxl.com/sustainability.
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a member of the Enterprise Data Platform (EDP) team at Macquarie, you will play a crucial role in managing Macquarie's Corporate Data Platform. The businesses supported by the platform rely heavily on it for various use cases such as data science, self-service analytics, operational analytics, and reporting. At Macquarie, we believe in leveraging the strengths of diverse individuals and empowering them to explore endless possibilities. With a global presence in 31 markets and a history of 56 years of unbroken profitability, you will join a collaborative and supportive team where every member contributes ideas and drives positive outcomes. In this role, your responsibilities will include delivering new platform capabilities, built on AWS and Kubernetes, that enhance resilience and redefine how the business leverages the platform. You will be involved in deploying tools, introducing new technologies, and automating processes to enhance efficiency. Additionally, you will focus on improving CI/CD pipelines, supporting platform applications, and ensuring smooth operations. To be successful in this role, you should have at least 3 years of experience in Cloud, DevOps, or Data Engineering with hands-on proficiency in AWS and Kubernetes. You should also possess expertise in Big Data technologies like Hive, Spark, and Presto, along with strong scripting skills in Python and Bash. A background in DevOps, Agile, Scrum, and Continuous Delivery environments is essential, along with excellent communication skills to collaborate effectively with cross-functional teams. Your passion for problem-solving, continuous learning, and keen interest in Big Data and Cloud technologies will be invaluable in this role. At Macquarie, we value individuals who are enthusiastic about building a better future with us. If you are excited about this opportunity and working at Macquarie, we encourage you to apply. As part of Macquarie, you will have access to a wide range of benefits such as wellbeing leave, paid parental leave, company-subsidized childcare services, volunteer leave, comprehensive medical and life insurance cover, employee assistance programs, learning and development opportunities, and flexible working arrangements. Technology plays a critical role at Macquarie, enabling every aspect of our operations and driving innovation in connecting people and data, building platforms, and designing future technology solutions. Our commitment to diversity, equity, and inclusion is unwavering, and we aim to provide reasonable adjustments to support individuals who may require assistance during the recruitment process and in their working arrangements. If you need additional support, please inform us during the application process.
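As a rough illustration of the Hive-plus-Spark work described above (not taken from the posting), a batch summarisation job in PySpark could look like the sketch below; the database, table, and column names are hypothetical placeholders.

```python
# Minimal sketch: a PySpark batch job summarising a Hive table.
# Assumes a cluster where Spark is configured with Hive metastore support.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-usage-summary")
    .enableHiveSupport()          # lets Spark read Hive metastore tables
    .getOrCreate()
)

# Hypothetical Hive table of platform usage events
events = spark.table("corp_data.platform_events")

daily = (
    events
    .groupBy("event_date", "business_unit")
    .agg(F.countDistinct("user_id").alias("active_users"),
         F.count("*").alias("event_count"))
)

# Write the summary back for downstream analytics or Presto queries
daily.write.mode("overwrite").saveAsTable("corp_data.daily_usage_summary")
```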
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
We are seeking experienced and talented engineers to join our team. Your main responsibilities will include designing, building, and maintaining the software that drives the global logistics industry. WiseTech Global is a leading provider of software for the logistics sector, facilitating connectivity for major companies like DHL and FedEx within their supply chains. Our organization is product and engineer-focused, with a strong commitment to enhancing the functionality and quality of our software through continuous innovation. Our primary Research and Development center in Bangalore plays a pivotal role in our growth strategies and product development roadmap. As a Lead Software Engineer, you will serve as a mentor, a leader, and an expert in your field. You should be adept at effective communication with senior management while also being hands-on with the code to deliver effective solutions. The technical environment you will work in includes technologies such as C#, Java, C++, Python, Scala, Spring, Spring Boot, Apache Spark, Hadoop, Hive, Delta Lake, Kafka, Debezium, GKE (Kubernetes Engine), Composer (Airflow), DataProc, DataStreams, DataFlow, MySQL RDBMS, MongoDB NoSQL (Atlas), UIPath, Helm, Flyway, Sterling, EDI, Redis, Elastic Search, Grafana Dashboard, and Docker. Before applying, please note that WiseTech Global may engage external service providers to assess applications. By submitting your application and personal information, you agree to WiseTech Global sharing this data with external service providers who will handle it confidentially in compliance with privacy and data protection laws.
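For readers unfamiliar with the Kafka and Spark pairing listed in the technical environment above, here is a minimal, hedged sketch of a Spark Structured Streaming consumer. The broker address and topic are placeholders, the job assumes the spark-sql-kafka connector is on the classpath, and it is illustrative only rather than anything from the posting.

```python
# Minimal sketch: consuming a Kafka topic with Spark Structured Streaming.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "shipment-events")             # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast the payload to string for parsing
events = raw.select(F.col("value").cast("string").alias("json_payload"))

query = (
    events.writeStream
    .format("console")       # in practice the sink would be Delta Lake, Hive, etc.
    .outputMode("append")
    .start()
)
query.awaitTermination()
```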
Posted 3 days ago
7.0 - 11.0 years
0 Lacs
Karnataka
On-site
The ideal candidate for this position should possess a Bachelor's/Master's/PhD degree in Math, Computer Science, Information Systems, Machine Learning, Statistics, Econometrics, Applied Mathematics, Operations Research, or a related technical field. You should have a minimum of 7 years of relevant work experience in a similar role, particularly as a data scientist or statistician, developing predictive analytics solutions for various business challenges. Your functional competencies should include advanced knowledge of statistical techniques, machine learning algorithms, data mining, and text mining. You must have a strong programming background and expertise in building models using languages such as SAS, Python, or R. Additionally, you should excel in storytelling and articulation, with the ability to translate analytical results into clear, concise, and persuasive insights for both technical and non-technical audiences. Experience in working with large datasets, both structured and unstructured, is crucial, along with the capability to comprehend business problems and devise optimal data strategies. Your responsibilities will encompass providing solutions for tasks like Customer Segmentation & Targeting, Propensity Modeling, Churn Modeling, Lifetime Value Estimation, Forecasting, Recommender Systems, Modeling Response to Incentives, Marketing Mix Optimization, and Price Optimization. It would be advantageous if you have experience working with big data platforms such as Hadoop, Hive, HBase, Spark, etc. The client you will be working with is a rapidly growing VC-backed on-demand startup that aims to revolutionize the food delivery industry. The team values talent, ambition, smartness, passion, versatility, focus, hyper-productivity, and creativity. The primary focus is on ensuring exceptional customer experience through superfast deliveries facilitated by a smartphone-equipped delivery fleet and custom-built routing algorithms. The company operates in eight cities across India and has secured substantial funding to support its expansion. If you meet the qualifications and are excited about the opportunity to contribute to this innovative venture, please share your updated profile at poc@mquestpro.com.
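A propensity or churn model of the kind mentioned above can be prototyped in a few lines with scikit-learn. The sketch below is illustrative rather than prescriptive: the input file, feature names, and target column are hypothetical.

```python
# Minimal sketch: a churn-propensity model with scikit-learn.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("customer_history.csv")        # hypothetical extract
features = ["order_count", "avg_order_value", "days_since_last_order"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["churned"], test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]      # churn propensity per customer
print("AUC:", roc_auc_score(y_test, scores))
```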
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You should have 5+ years of experience, with expertise in Data Engineering. Your hands-on experience should include the design and development of big data platforms. The ideal candidate will have a deep understanding of modern data processing technology stacks such as Spark, HBase, Hive, and other Hadoop ecosystem technologies, with a focus on development using Scala. Additionally, you should possess a deep understanding of streaming data architectures and technologies for real-time and low-latency data processing. Experience with agile development methods, including core values, guiding principles, and key agile practices, is required. Understanding of the theory and application of Continuous Integration/Delivery is a plus. Familiarity with NoSQL technologies, including column family, graph, document, and key-value data storage technologies, is desirable. A passion for software craftsmanship is essential for this role. Experience in the Financial Industry would be beneficial.
Posted 4 days ago
0.0 - 31.0 years
1 - 2 Lacs
Sector 99, Gurgaon/Gurugram
On-site
We are Hiring at *WOW! Momo* 🥟🍗🍜🍦
Company Description: Wow! Momo is a dynamic brand that started in Kolkata and has grown to become the largest chain of Momo in the country. It offers a variety of Momo in different formats and flavours and has expanded to various cities in India. The brand is known for its innovative creations, such as Sizzler Momo, Momo Burgers, and Tandoori Momo. It has successfully established itself in the Quick-Service Restaurant Industry and aims to expand internationally soon.
🏢Position Title: Team Member / Shift Manager
📍Location: Satya The Hive, Dwarka Expy, Sector 102, Gurugram, Haryana 122505
🎓Qualification: 10th Pass
💸Salary Range: 12,200 to 13,500 per month (Fixed) + 1,500 incentive + PF / ESIC / Insurance (17,000 to 18,000 CTC for shift manager) - NO CHARGES WILL BE TAKEN FROM THE CANDIDATE
Other benefits (Two times food) - Promotion in 3 months.
Immediate Joiners Preferred
If you're passionate about food and customer service, we’d love to hear from you.
Job Description:
Maintain a fast speed of service at all times.
Take orders from customers and input their selections into the restaurant’s computer systems.
Assemble orders on trays or in bags depending on the type of order.
Process large food orders for events.
Count & verify your till at the end of each shift and deposit money in the safe.
Clean your station thoroughly before, during and after each shift.
Respond to guest questions, concerns and complaints and make sure they leave satisfied.
Follow all restaurant safety and security procedures.
Arrive on time for all shifts and stay until shift completion.
Maintain cleanliness & hygiene of the premises.
Receive and stock all materials as per SOP.
Maintain FIFO & inventory properly.
Meet & greet each & every guest with a smile.
Requirements:
✅ Go-Getter Attitude
✅ Prior experience in the Quick Service Restaurant (QSR) industry
If you're passionate about the QSR industry and meet the above criteria, we’d love to hear from you! References are also welcomed!
Join the Wow Momo family and take your career to the next level with a fast-growing brand!
Posted 4 days ago
10.0 - 31.0 years
4 - 6 Lacs
Salt Lake City, Kolkata/Calcutta
On-site
Data scientist roles and responsibilities include:
Data mining or extracting usable data from valuable data sources
Using machine learning tools to select features, create and optimize classifiers
Carrying out the preprocessing of structured and unstructured data
Enhancing data collection procedures to include all relevant information for developing analytic systems
Processing, cleansing, and validating the integrity of data to be used for analysis
Analyzing large amounts of information to find patterns and solutions
Developing prediction systems and machine learning algorithms
Presenting results in a clear manner
Proposing solutions and strategies to tackle business challenges
Collaborating with Business and IT teams
Data Scientist Skills
You need to master the skills required for data scientist jobs in various industries and organizations if you want to pursue a data scientist career. Let’s look at the must-have data scientist qualifications.
Key skills needed to become a data scientist:
Programming Skills – knowledge of statistical programming languages like R, Python, and database query languages like SQL, Hive, Pig is desirable. Familiarity with Scala, Java, or C++ is an added advantage.
Statistics – good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators, etc. Proficiency in statistics is essential for data-driven companies.
Machine Learning – good knowledge of machine learning methods like k-Nearest Neighbors, Naive Bayes, SVM, and Decision Forests.
Strong Math Skills (Multivariable Calculus and Linear Algebra) – understanding the fundamentals of Multivariable Calculus and Linear Algebra is important as they form the basis of a lot of predictive performance or algorithm optimization techniques.
Data Wrangling – proficiency in handling imperfections in data is an important aspect of a data scientist job description (a short wrangling example follows this listing).
Experience with Data Visualization Tools like matplotlib, ggplot, d3.js, and Tableau that help to visually encode data.
Excellent Communication Skills – it is incredibly important to describe findings to a technical and non-technical audience.
Strong Software Engineering Background
Hands-on experience with data science tools
Problem-solving aptitude
Analytical mind and great business sense
Degree in Computer Science, Engineering or relevant field is preferred
Proven experience as a Data Analyst or Data Scientist
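As a small, hedged example of the data wrangling skill referenced in the list above, the pandas snippet below cleans an imperfect dataset before modelling. It is not drawn from the posting; the file and column names are hypothetical.

```python
# Minimal sketch of data wrangling: cleaning an imperfect dataset with pandas.
import pandas as pd

df = pd.read_csv("raw_transactions.csv")   # hypothetical raw extract

# Drop exact duplicate records
df = df.drop_duplicates()

# Normalise an inconsistent categorical field
df["city"] = df["city"].str.strip().str.title()

# Impute missing numeric values with the median, flagging what was imputed
df["amount_missing"] = df["amount"].isna()
df["amount"] = df["amount"].fillna(df["amount"].median())

# Parse dates and discard rows that still fail validation
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
df = df.dropna(subset=["order_date"])

print(df.describe(include="all"))
```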
Posted 4 days ago
2.0 - 6.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
We are looking for an experienced AI/ML Architect to spearhead the design, development, and deployment of cutting-edge AI and machine learning systems. As the ideal candidate, you should possess a strong technical background in Python and data science libraries, profound expertise in AI and ML algorithms, and hands-on experience in crafting scalable AI solutions. This role demands a blend of technical acumen, leadership skills, and innovative thinking to enhance our AI capabilities. Your responsibilities will include identifying, cleaning, and summarizing complex datasets from various sources, developing Python/PySpark scripts for data processing and transformation, and applying advanced machine learning techniques like Bayesian methods and deep learning algorithms. You will design and fine-tune machine learning models, build efficient data pipelines, and leverage distributed databases and frameworks for large-scale data processing. In addition, you will lead the design and architecture of AI systems, with a focus on Retrieval-Augmented Generation (RAG) techniques and large language models. Your qualifications should encompass 5-7 years of total experience with 2-3 years in AI/ML, proficiency in Python and data science libraries, hands-on experience with PySpark scripting and AWS services, strong knowledge of Bayesian methods and time series forecasting, and expertise in machine learning algorithms and deep learning frameworks. You should also have experience in structured, unstructured, and semi-structured data, advanced knowledge of distributed databases, and familiarity with RAG systems and large language models for AI outputs. Strong collaboration, leadership, and mentorship skills are essential. Preferred qualifications include experience with Spark MLlib, SciPy, StatsModels, SAS, and R, a proven track record in developing RAG systems, and the ability to innovate and apply the latest AI techniques to real-world business challenges. Join our team at TechAhead, a global digital transformation company known for AI-first product design thinking and bespoke development solutions. With over 14 years of experience and partnerships with Fortune 500 companies, we are committed to driving digital innovation and delivering excellence. At TechAhead, you will be part of a dynamic team that values continuous learning, growth, and crafting tailored solutions for our clients. Together, let's shape the future of digital innovation worldwide!
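Time series forecasting is named among the required skills above, and StatsModels among the preferred tools, so a minimal illustrative sketch follows. The series is synthetic and the ARIMA order is arbitrary; none of this is drawn from the posting itself.

```python
# Minimal sketch of time-series forecasting with statsmodels.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical monthly metric with a simple trend plus noise
idx = pd.date_range("2021-01-01", periods=36, freq="MS")
y = pd.Series(100 + np.arange(36) * 2.5 + np.random.normal(0, 3, 36), index=idx)

model = ARIMA(y, order=(1, 1, 1))    # arbitrary ARIMA(p, d, q) specification
fitted = model.fit()

forecast = fitted.forecast(steps=6)  # six months ahead
print(forecast)
```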
Posted 4 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Principal Analyst at Citi's Analytics and Information Management (AIM) team in Bangalore, India, you will play a crucial role in creating client-centric analytical solutions for various business challenges. With a focus on client obsession and stakeholder management, you will be responsible for owning and delivering complex analytical projects. Your expertise in business context understanding, data analysis, and project management will be essential in identifying trends and patterns and presenting high-quality solutions to senior management. Your primary responsibilities will include developing business critical dashboards, assessing and optimizing marketing programs, sizing the impact of strategic changes, and streamlining existing processes. By leveraging your skills in SQL, Python, PySpark, Hive, and Impala, you will work with large datasets to extract insights that drive revenue growth and business decisions. Additionally, your experience in Investment Analytics, Retail Analytics, Credit Cards, and Financial Services will be valuable in delivering actionable intelligence to business leaders. To excel in this role, you should possess a master's or bachelor's degree in Engineering, Technology, or Computer Science from premier institutes, along with 5-6 years of experience in delivering analytical solutions. Your ability to articulate and solve complex business problems, along with excellent communication and interpersonal skills, will be key in collaborating with cross-functional teams and stakeholders. Moreover, your hands-on experience in Tableau and project management skills will enable you to mentor and guide junior team members effectively. If you are passionate about data, eager to tackle new challenges, and thrive in a dynamic work environment, this position offers you the opportunity to contribute to Citi's mission of enabling growth and economic progress through innovative analytics solutions. Join us in driving business success and making a positive impact on the financial services industry. Citi is an equal opportunity and affirmative action employer, offering full-time employment in the field of Investment Analytics, Retail Analytics, Credit Cards, and Financial Services. If you are ready to take your analytics career to the next level, we invite you to apply and be part of our global community at Citi.
Posted 4 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
BizViz is a company that offers a comprehensive view of a business's data, catering to various industries and meeting the diverse needs of business executives. With a dedicated team of over 50 professionals working on the BizViz platform for several years, the company aims to develop technological solutions that provide our clients with a competitive advantage. At BizViz, we are committed to the success of our customers, striving to create applications that align with their unique visions and requirements. We steer clear of generic ERP templates, offering businesses a more tailored solution. As a Big Data Engineer at BizViz, you will join a small, agile team of data engineers focused on building an innovative big data platform for enterprises dealing with critical data management and diverse application stakeholders at scale. The platform handles data ingestion, warehousing, and governance, allowing developers to create complex queries efficiently. With features like automatic scaling, elasticity, security, logging, and data provenance, our platform empowers developers to concentrate on algorithms rather than administrative tasks. We are seeking engineers who are eager for technical challenges, to enhance our current platform for existing clients and develop new capabilities for future customers.
Key Responsibilities:
- Work as a Senior Big Data Engineer within the Data Science Innovation team, collaborating closely with internal and external stakeholders throughout the development process.
- Understand the needs of key stakeholders to enhance or create new solutions related to data and analytics.
- Collaborate in a cross-functional, matrix organization, even in ambiguous situations.
- Contribute to scalable solutions using large datasets alongside other data scientists.
- Research innovative data solutions to address real market challenges.
- Analyze data to provide fact-based recommendations for innovation projects.
- Explore Big Data and other unstructured data sources to uncover new insights.
- Partner with cross-functional teams to develop and execute business strategies.
- Stay updated on advancements in data analytics, Big Data, predictive analytics, and technology.
Qualifications:
- BTech/MCA degree or higher.
- Minimum 5 years of experience.
- Proficiency in Java, Scala, Python.
- Familiarity with Apache Spark, Hadoop, Hive, Spark SQL, Spark Streaming, Apache Kafka.
- Knowledge of Predictive Algorithms, MLlib, Cassandra, RDBMS (MySQL, MS SQL, etc.), NoSQL, Columnar Databases, Bigtable.
- Deep understanding of search engine technology, including Elasticsearch/Solr.
- Experience in Agile development practices such as Scrum.
- Strong problem-solving skills for designing algorithms related to data cleaning, mining, clustering, and pattern recognition.
- Ability to work effectively in a matrix-driven organization under varying circumstances.
- Desirable personal qualities: creativity, tenacity, curiosity, and a passion for technical excellence.
Location: Bangalore
To apply for this position, interested candidates can send their applications to careers@bdb.ai.
Posted 4 days ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You are invited to join our team in Chennai as a Talend Developer on a contract basis for a duration of 3 months. Your primary responsibility will involve designing, developing, and implementing data integration solutions utilizing Talend Data Integration tools. This position is tailored for individuals who excel in a dynamic, project-oriented setting and possess a solid foundation in ETL development. Your key duties will include crafting and executing scalable ETL processes through Talend Open Studio/Enterprise. You will be tasked with merging data from various sources into target systems while ensuring data quality and coherence. Collaboration with Data Architects, Analysts, and fellow developers will be essential to grasp data requirements and transform them into technical solutions. Moreover, optimizing and fine-tuning ETL jobs for enhanced performance and reliability will be part of your routine tasks. It will also be your responsibility to create and uphold technical documentation related to ETL processes and data workflows, as well as troubleshoot and resolve ETL issues and production bugs. An ideal candidate for this role should possess a minimum of 3 years of hands-on experience with Talend Data Integration. Proficiency in ETL best practices, data modeling, and data warehousing concepts is expected. Additionally, a strong command of SQL and experience working with relational databases such as Oracle, MySQL, and PostgreSQL is essential. Knowledge of Big Data technologies like Hadoop, Spark, and Hive is advantageous, as is familiarity with cloud platforms like AWS, Azure, and GCP. Your problem-solving skills, ability to work independently, and excellent communication and teamwork abilities will be critical to your success in this role. This is a contractual/temporary position that requires your presence at the office for the duration of the 3-month contract term.
Posted 4 days ago
8.0 - 12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Applications Development Senior Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.
Key Responsibilities:
The Senior Big Data Engineer will be working very closely with and managing the work of a team of data engineers working on our Big Data Platform. The tech lead will need the following core skills:
Work closely with the Olympus core & product processor teams and drive the build out & implementation of the CitiDigital reporting using the Olympus framework
Accountable for all phases of the development process – analysis, design, construction, testing and implementation in agile development lifecycles
Perform Unit Testing and System Testing for all applications developed / enhancements and ensure that all critical and high-severity bugs are addressed
Subject Matter Expert (SME) in at least one area of Applications Development
Align to Engineering Excellence Development principles and standards
Promote and increase our Development Productivity scores for coding
Fully adhere to and evangelize a full Continuous Integration and Continuous Deploy pipeline
Strong SQL skills to extract, analyze and reconcile huge data sets (an illustrative sketch follows this listing)
Demonstrate ownership and initiative taking
The project will run in iteration lifecycles with agile practices, so experience of agile development and scrums is highly beneficial.
Qualifications:
Bachelor's degree/University degree or equivalent experience, Master's degree preferred
8-12 years' experience in application / software development
Skills:
Prior work experience in Capital/Regulatory Market or related industry
Experience with Big Data technologies (Spark, Hadoop, HDFS, Hive, Impala)
Experience with Python/Scala and Unix Shell scripting is a must
Excellent analytical, problem solving, negotiating, influencing, facilitation, prioritization, decision-making and conflict resolution skills are required
Solid understanding of the Big Data architecture and the ability to troubleshoot development / performance issues on Hadoop (Cloudera preferably)
Strong data analysis skills and the ability to slice and dice the data as needed for business reporting
Passionate, self-driven, with a can-do attitude
Able to build practical solutions
Good team player who can work in a global team model and is deadline oriented
The candidate is expected to be dynamic and flexible with a high energy level as this is a demanding and rapidly changing environment.
Ability to work independently given general guidance
Education: Bachelor's degree/University degree or equivalent experience
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills
Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
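The requirement above for strong SQL skills to extract, analyze and reconcile huge data sets can be pictured with a small Spark SQL sketch. Table names, keys, and columns are hypothetical; this is an example of the general technique, not the team's actual reconciliation logic.

```python
# Minimal sketch: reconciling two large datasets with Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("recon-demo").enableHiveSupport().getOrCreate()

source = spark.table("staging.trades_source")      # hypothetical upstream feed
target = spark.table("warehouse.trades_reported")  # hypothetical downstream table

source.createOrReplaceTempView("src")
target.createOrReplaceTempView("tgt")

# Trades present in the source feed but missing (or mismatched) downstream
breaks = spark.sql("""
    SELECT s.trade_id, s.notional AS src_notional, t.notional AS tgt_notional
    FROM src s
    LEFT JOIN tgt t ON s.trade_id = t.trade_id
    WHERE t.trade_id IS NULL OR s.notional <> t.notional
""")

print("Reconciliation breaks:", breaks.count())
```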
Posted 4 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Teamwork makes the stream work.
Roku is changing how the world watches TV
Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.
About the team
The mission of Roku's Data Engineering team is to develop a world-class big data platform so that internal and external customers can leverage data to grow their businesses. Data Engineering works closely with business partners and Engineering teams to collect metrics on existing and new initiatives that are critical to business success. As a Senior Data Engineer working on Device metrics, you will design data models & develop scalable data pipelines to capture different business metrics across different Roku Devices.
About the role
Roku pioneered streaming to the TV. We connect users to the streaming content they love, enable content publishers to build and monetize large audiences, and provide advertisers with unique capabilities to engage consumers. Roku streaming players and Roku TV™ models are available around the world through direct retail sales and licensing arrangements with TV brands and pay-TV operators. With tens of millions of players sold across many countries, thousands of streaming channels and billions of hours watched over the platform, building a scalable, highly available, fault-tolerant big data platform is critical for our success. This role is based in Bangalore, India and requires hybrid working, with 3 days in the office.
What you'll be doing
Build highly scalable, available, fault-tolerant distributed data processing systems (batch and streaming systems) processing tens of terabytes of data ingested every day and a petabyte-sized data warehouse
Build quality data solutions and refine existing diverse datasets to simplified data models encouraging self-service
Build data pipelines that optimize for data quality and are resilient to poor quality data sources
Own the data mapping, business logic, transformations and data quality
Low level systems debugging, performance measurement & optimization on large production clusters
Participate in architecture discussions, influence product roadmap, and take ownership and responsibility over new projects
Maintain and support existing platforms and evolve to newer technology stacks and architectures
We're excited if you have
Extensive SQL skills
Proficiency in at least one scripting language, Python is required
Experience in big data technologies like HDFS, YARN, Map-Reduce, Hive, Kafka, Spark, Airflow, Presto, etc. (a brief Airflow sketch follows this listing)
Proficiency in data modeling, including designing, implementing, and optimizing conceptual, logical, and physical data models to support scalable and efficient data architectures.
Experience with AWS, GCP, Looker is a plus
Collaborate with cross-functional teams such as developers, analysts, and operations to execute deliverables
5+ years professional experience as a data or software engineer
BS in Computer Science; MS in Computer Science preferred
AI Literacy / AI growth mindset
Benefits
Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.
The Roku Culture
Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a fewer number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV. We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet.
By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.
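As a hedged illustration of the Spark/Airflow-style pipelines described in this listing, the sketch below defines a two-task daily Airflow DAG. The DAG id, task logic, and schedule are placeholders, it assumes Airflow 2.4 or later, and it is not Roku code.

```python
# Minimal sketch of a daily batch pipeline as an Airflow DAG.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_device_metrics(**context):
    # placeholder: pull raw device events for the run date
    print("extracting metrics for", context["ds"])

def load_to_warehouse(**context):
    # placeholder: write the transformed output to the warehouse
    print("loading summary for", context["ds"])

with DAG(
    dag_id="device_metrics_daily",   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_device_metrics)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
    extract >> load                   # extract runs before load each day
```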
Posted 4 days ago