3.0 - 7.0 years
0 Lacs
karnataka
On-site
About Us
6thstreet.com is one of the largest omnichannel fashion & lifestyle destinations in the GCC, home to 1200+ international brands. The fashion-savvy destination offers collections from over 150 international fashion brands such as Dune London, ALDO, Naturalizer, Nine West, Charles & Keith, New Balance, Crocs, Birkenstock, Skechers, Levi's, Aeropostale, Garage, Nike, Adidas Originals, Rituals, and many more. The online fashion platform also provides free delivery, free returns, cash on delivery, and the option for click and collect.

Job Description
We are looking for a seasoned Data Engineer to design and manage data solutions. Expertise in SQL, Python, and AWS is essential. The role includes client communication, recommending modern data tools, and ensuring smooth data integration and visualization. Strong problem-solving and collaboration skills are crucial.

Responsibilities
- Understand and analyze client business requirements to support data solutions.
- Recommend suitable modern data stack tools based on client needs.
- Develop and maintain data pipelines, ETL processes, and data warehousing.
- Create and optimize data models for client reporting and analytics.
- Ensure seamless data integration and visualization with cross-functional teams.
- Communicate with clients for project updates and issue resolution.
- Stay updated on industry best practices and emerging technologies.

Skills Required
- 3-5 years in data engineering/analytics with a proven track record.
- Proficiency in SQL and Python for data manipulation and analysis; knowledge of PySpark is a plus.
- Experience with data warehouse platforms like Redshift and Google BigQuery.
- Experience with AWS services like S3, Glue, and Athena.
- Proficiency in Airflow.
- Familiarity with event tracking platforms like GA or Amplitude is a plus.
- Strong problem-solving skills and adaptability.
- Excellent communication skills and proactive client engagement.
- Ability to get things done, unblock yourself, and collaborate effectively with team members and clients.

Benefits
- Full-time role.
- Competitive salary + bonus.
- Company employee discounts across all brands.
- Medical & health insurance.
- Collaborative work environment and good-vibes work culture.
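For context on the kind of work this posting describes (Python pipelines orchestrated with Airflow on AWS), here is a minimal, illustrative Airflow DAG sketch. The DAG id, file paths, and transform rule are hypothetical, and a real pipeline would typically land data in S3/Redshift/BigQuery rather than writing local CSVs.

```python
# Illustrative daily extract-transform-load DAG; names and paths are hypothetical.
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Placeholder extract: a real job might read from an API or S3.
    df = pd.DataFrame({"order_id": [1, 2], "amount": [120.0, 80.5]})
    df.to_csv("/tmp/orders_raw.csv", index=False)


def transform_orders(**context):
    df = pd.read_csv("/tmp/orders_raw.csv")
    df["amount_rounded"] = df["amount"].round(2)  # example cleanup rule
    df.to_csv("/tmp/orders_clean.csv", index=False)


def load_orders(**context):
    # Placeholder load: a real pipeline might COPY into Redshift or write Parquet to S3.
    print(pd.read_csv("/tmp/orders_clean.csv").head())


with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ style; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform", python_callable=transform_orders)
    load = PythonOperator(task_id="load", python_callable=load_orders)

    extract >> transform >> load
```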
Posted 3 weeks ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
The role within the Niro Money Data and Analytics team involves translating data into actionable insights to enhance marketing ROI, drive business growth, and improve customer experience for various financial products such as personal loans, home loans, credit cards, and insurance. The successful candidate will possess a strong background in data analytics and be capable of providing strategic recommendations to key stakeholders and business leaders.

You will lead, mentor, and develop a high-performing team of data analysts and data scientists focused on building decision science models and segmentations to predict customer behaviors. Collaborating with the Partnership & Marketing team, you will conduct marketing experiments to enhance funnel conversion rates. Additionally, you will evaluate the effectiveness of marketing campaigns, identify successful strategies, recommend necessary changes, and oversee the implementation of customer journey-related product enhancements. Creating a culture of collaboration, innovation, and data-driven decision-making across various teams is crucial.

You will manage multiple analytics projects concurrently, prioritizing them based on potential business impact and ensuring timely and accurate completion. Project planning, monitoring, and addressing challenges promptly to keep projects on track are essential responsibilities. Collaborating with the Data Engineering, Technology, and Product teams, you will develop and implement data capabilities for conducting marketing experiments and delivering actionable insights at scale.

Applicants should hold a Master's degree in statistics, mathematics, data science, or economics, or a BTech in computer science or engineering. A minimum of 5 years of hands-on experience in decision science analytics and developing data-driven strategies, preferably in the financial services industry, is required, along with at least 2 years of experience managing and leading teams of data analysts and data scientists. Proficiency in statistical model development within the financial services industry, including the use of logistic regression and gradient boosting algorithms with Python packages like scikit-learn, XGBoost, statsmodels, or decision tree tools, is essential. Moreover, candidates should have a minimum of 2 years of practical experience in SQL and Python. A proven track record of making data-driven decisions and solving problems based on analytics is necessary. Familiarity with Snowflake, AWS Athena/S3, Redshift, and BI tools like AWS QuickSight is advantageous. An analytical mindset and the ability to assess complex scenarios and make data-driven decisions are essential qualities. A creative and curious nature, willingness to learn new tools and techniques, and a data-oriented personality are desired traits. Excellent communication and interpersonal skills are crucial for effectively collaborating with diverse stakeholders.
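As a rough illustration of the decision-science modeling this role mentions (logistic regression or gradient boosting with scikit-learn/XGBoost), here is a minimal sketch that fits a logistic regression on synthetic funnel data. All feature names and the label-generating rule are invented for the example.

```python
# Minimal propensity-model sketch on synthetic data; features are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000
X = np.column_stack([
    rng.normal(650, 60, n),   # hypothetical credit-score-like feature
    rng.exponential(3, n),    # hypothetical sessions per week
    rng.integers(0, 2, n),    # hypothetical prior-product flag
])
# Synthetic conversion label loosely driven by the features.
logit = 0.01 * (X[:, 0] - 650) + 0.2 * X[:, 1] + 0.8 * X[:, 2] - 1.5
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```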
Posted 3 weeks ago
2.0 years
0 Lacs
Bengaluru East, Karnataka, India
Remote
At HackerRank, we are on a mission to change the world to value skills over pedigree . We are a high-performing, mission-driven team that truly, madly, deeply cares about what we do. We don’t see velocity and quality as tradeoffs; both matter. If you take pride in high-impact work and thrive in a driven team, HackerRank is where you belong. About The Team Our data team is driven by a clear mission to democratize data within HackerRank, making it accessible and actionable for everyone. Recent Achievements The team recently launched an exports service that has revolutionized performance, achieving a remarkable 10x improvement. This milestone underscores our commitment to delivering impactful, scalable solutions. Collaboration Style Collaboration is at the heart of how we work. We seamlessly balance synchronous and asynchronous methods, enabling us to work cohesively as a team while respecting individual workflows. This approach fosters efficiency and inclusivity in tackling tasks together. About The Role This role is all about building modern, scalable data systems that power real products. You’ll work across the full stack from designing robust data pipelines to supporting search and analytics platforms using tools like Airflow, dbt, and Spark . It’s a hands-on role with real ownership, where you’ll take projects from idea to production. You’ll work with cloud platforms (like AWS or GCP) , open-source tools, and contribute to shaping a flexible, future-ready data foundation. If you enjoy solving complex problems, learning new technologies, and working both independently and with a collaborative team, this role is a great fit. What You’ll Do Evaluate technologies, develop POCs, solve technical challenges and propose innovative solutions for our technical and business problems Delight our stakeholders, customers and partners by building high-quality, well-tested, scalable and reliable business applications. Design, build and maintain streaming and batch data pipelines that can scale. Architect, develop and maintain our Modern lake house Platform using AWS native infrastructure Designing Complex Data Models to deliver insights and enable self-service Take ownership of scaling, performance, security, and reliability of our data infrastructure Hiring, guiding and mentoring junior engineers Work in an agile development environment, participate in code reviews Collaborate with remote development teams and cross-functional teams You Will Thrive In This Role If You love solving tough challenges that create real-world impact and are excited to dive into uncharted territories. You enjoy fast-paced, dynamic environments where collaboration isn’t just encouraged - it’s essential You care about understanding product challenges and finding creative solutions, beyond just coding tasks. You value shipping solutions quickly while refining and enhancing them as you go. You’re willing to break boundaries and contribute wherever needed, even if it’s outside your usual responsibilities. What You Bring 2+ years of experience with designing, developing and maintaining data engineering & BI solutions. Experience with Data Modeling for Big Data Solutions. Experience with Spark, Spark Structured Streaming (Scala Spark) Experience with database technologies like Redshift or Trino Experience querying massive datasets using Languages like SQL, Hive, Spark, and Trino Experience with performance tuning complex data warehouses and queries. 
Able to solve problems of scale, performance, security, and reliability Self-driven, initiative taker with good communication skills, ability to lead and mentor junior engineers, work with cross-functional teams, and drive architecture decisions. Bonus: Experience with ETL Design & Orchestration using platforms like Apache Airflow, MageAI Want to learn more about HackerRank? Check out HackerRank.com to explore our products, solutions and resources, and dive into our story and mission here. HackerRank is a proud equal employment opportunity and affirmative action employer. We provide equal opportunity to everyone for employment based on individual performance and qualification. We never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines. Linkedin |X | Blog | Instagram | Life@HackerRank| Notice To Prospective HackerRank Job Applicants Our Recruiters use @hackerrank.com email addresses. We never ask for payment or credit check information to apply, interview, or work here.
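The role above asks for Spark Structured Streaming experience (in Scala Spark); purely as an illustration, here is the same pattern sketched in PySpark: a windowed aggregation over a streamed event source. The S3 path, schema, and console sink are hypothetical, and a production job would write to a lake table instead.

```python
# Sketch of a streaming windowed aggregation; source path and schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("submission_events").getOrCreate()

schema = StructType([
    StructField("event_time", TimestampType()),
    StructField("user_id", StringType()),
    StructField("challenge_id", StringType()),
])

events = (
    spark.readStream.schema(schema)
    .json("s3a://example-bucket/events/")  # hypothetical source path
)

# Count submissions per challenge in 5-minute windows, tolerating late data.
counts = (
    events.withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "5 minutes"), "challenge_id")
    .count()
)

query = (
    counts.writeStream.outputMode("update")
    .format("console")  # a real job would write to a Delta/Iceberg table
    .start()
)
query.awaitTermination()
```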
Posted 3 weeks ago
3.0 - 6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Overview Cvent is a leading meetings, events, and hospitality technology provider with more than 4,800 employees and ~22,000 customers worldwide, including 53% of the Fortune 500. Founded in 1999, Cvent delivers a comprehensive event marketing and management platform for marketers and event professionals and offers software solutions to hotels, special event venues and destinations to help them grow their group/MICE and corporate travel business. Our technology brings millions of people together at events around the world. In short, we’re transforming the meetings and events industry through innovative technology that powers human connection. In This Role, You Will Design, develop, and manage databases on the AWS cloud platform Develop and maintain automation scripts or jobs to perform routine database tasks such as provisioning, backups, restores, and data migrations. Build and maintain automated testing frameworks for database changes and upgrades to minimize the risk of introducing errors. Implement self-healing mechanisms to automatically recover from database failures or performance degradation. Integrate database automation tools with CI/CD pipelines to enable continuous delivery and deployment of database changes. Collaborate with cross-functional teams to understand their data requirements and ensure that the databases meet their needs Implement and manage database security policies, including access control, data encryption, and backup and recovery procedures Ensure that database backups and disaster recovery procedures are in place and tested regularly Develop and maintain database documentation, including data dictionaries, data models, and technical specifications Stay up-to-date with the latest cloud technologies and trends and evaluate new tools and products that could improve database performance and scalability. Here's What You Need Bachelor's degree in Computer Science, Information Technology, or a related field Minimum of 3-6 years of experience in designing, building, and administering databases on the AWS cloud platform Strong experience with Infra as Code (CloudFormation/AWS CDK) and automation experience in Python In-depth knowledge of AWS database services such as Amazon RDS, EC2, S3, Amazon Aurora, and Amazon Redshift and Postgres/Mysql/SqlServer Strong understanding of database design principles, data modelling, and normalisation Experience with database migration to AWS cloud platform Strong understanding of database security principles and best practices Excellent troubleshooting and problem-solving skills Ability to work independently and in a team environment Good To Have AWS certifications such as AWS Certified Solutions Architect, AWS Certified DevOps Engineer, or AWS Certified Database Specialty are a plus.
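One of the routine tasks listed above is automating database backups. A minimal sketch of that idea with boto3 follows; the instance identifier, region, and snapshot naming are hypothetical, and a real setup would more likely run this from a scheduled Lambda or rely on automated RDS backups and CloudFormation/CDK-managed policies.

```python
# Illustrative backup automation: take a manual RDS snapshot and wait for it.
import datetime

import boto3

rds = boto3.client("rds", region_name="us-east-1")


def snapshot_instance(db_instance_identifier: str) -> str:
    """Create a timestamped manual snapshot and return its identifier."""
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")
    snapshot_id = f"{db_instance_identifier}-manual-{stamp}"
    rds.create_db_snapshot(
        DBSnapshotIdentifier=snapshot_id,
        DBInstanceIdentifier=db_instance_identifier,
    )
    # Block until the snapshot is available (a real job might poll asynchronously).
    waiter = rds.get_waiter("db_snapshot_available")
    waiter.wait(DBSnapshotIdentifier=snapshot_id)
    return snapshot_id


if __name__ == "__main__":
    print(snapshot_instance("orders-prod"))  # hypothetical instance name
```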
Posted 3 weeks ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
BI Architect About Amgen Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. About The Role Role Description: We are seeking an experienced BI Architect with expertise in Databricks, Spotfire (Tableau and Power BI secondary), AWS, and enterprise business intelligence (BI) solutions to design and implement scalable, high-performance BI architectures. This role will focus on data modeling, visualization, governance, self-service BI enablement, and cloud-based BI solutions, ensuring efficient, data-driven decision-making across the organization. The ideal candidate will have strong expertise in BI strategy, data engineering, data warehousing, semantic layer modeling, dashboarding, and performance optimization, working closely with data engineers, business stakeholders, and leadership to drive BI adoption and enterprise analytics excellence. Preferred Candidate would have extensive Spotfire experience followed by Power BI or Tableau. Roles & Responsibilities: Design and develop enterprise BI architectures and implement the architectural vision for TIBCO Spotfire at the enterprise level hosted in AWS Partner with data engineers and architects to ensure optimal data modeling, caching, and query performance in Spotfire Design scalable, secure, and high-performance Spotfire environments, including multi-node server setups and hybrid cloud integrations. Develop reusable frameworks and templates for dashboards, data models, and automation processes. Optimize BI query performance, indexing, partitioning, caching, and report rendering to enhance dashboard responsiveness and data refresh speed. Implement real-time and batch data integration strategies, ensuring smooth data flow from APIs, ERP/CRM systems (SAP, Salesforce, Dynamics 365), cloud storage, and third-party data sources into BI solutions. Establish and enforce BI governance best practices, including data cataloging, metadata management, access control, data lineage tracking, and compliance standards. Troubleshoot interactive dashboards, paginated reports, and embedded analytics solutions that deliver actionable insights. Implement DataOps and CI/CD pipelines for BI, leveraging Deployment Pipelines, Git integration, and Infrastructure as Code (IaC) to enable version control and automation. Stay up to date with emerging BI technologies, cloud analytics trends, and AI/ML-powered BI solutions to drive innovation. Collaborate with business leaders, data analysts, and engineering teams to ensure BI adoption, self-service analytics enablement, and business-aligned KPIs. Provide mentorship and training to BI developers, analysts, and business teams, fostering a data-driven culture across the enterprise. Must-Have Skills: Experience in BI architecture, data analytics, AWS, and enterprise BI solution development Strong expertise in Spotfire including information links, Spotfire Analyst, Spotfire Server, and Spotfire Web Player Hands-on experience with Databricks (Apache Spark, Delta Lake, SQL, PySpark) for data processing, transformation, and analytics. 
- Experience in scripting and extensions using Python or R
- Expertise in BI strategy, KPI standardization, and enterprise data modeling, including dimensional modeling, star schema, and data virtualization.
- Hands-on experience with cloud BI solutions and enterprise data warehouses, such as Azure Synapse, AWS Redshift, Snowflake, Google BigQuery, or SQL Server Analysis Services (SSAS).
- Experience with BI governance, access control, metadata management, data lineage, and regulatory compliance frameworks.
- Expertise in Agile BI development, Scaled Agile (SAFe), DevOps for BI, and CI/CD practices for BI deployments.
- Ability to collaborate with C-level executives, business units, and engineering teams to drive BI adoption and data-driven decision-making.

Good-to-Have Skills:
- Experience with TIBCO Spotfire Lead Discovery
- Knowledge of AI-powered BI, natural language processing (NLP) in BI, and automated machine learning (AutoML) for analytics.
- Experience with multi-cloud BI architectures and federated query solutions using Power BI or Tableau.
- Understanding of GraphQL, REST APIs, and data mesh principles for enterprise data access in BI.
- Knowledge of AI/ML pipeline integration within enterprise data architectures.

Education and Professional Certifications
- Bachelor's degree with 9-13 years of experience in Computer Science, IT, or a related field
- TIBCO Spotfire certifications
- Power BI certifications
- Tableau certifications

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly, be organized and detail oriented.
- Strong presentation and public speaking skills.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 3 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
I am thrilled to share an exciting opportunity with one of our esteemed clients! 🚀 Join me in exploring new horizons and unlocking potential if you're ready for a challenge and growth.

Experience: 6-12 years
Location: Hyderabad, Chennai (work from office; immediate joiners only)
Mandatory skills: SQL, Python, PySpark, Databricks (strong in core Databricks), AWS (mandatory)

JD:
- Manage and optimize cloud infrastructure (AWS, Databricks) for data storage, processing, and compute resources, ensuring seamless data operations.
- Implement data quality checks, validation rules, and transformation logic to ensure the accuracy, consistency, and reliability of data.
- Integrate data from multiple sources, ensuring data is accurately transformed and stored in optimal formats (e.g., Delta Lake, Redshift, S3).
- Automate data workflows using tools like Airflow, Databricks APIs, and other orchestration technologies to streamline data ingestion, processing, and reporting tasks.

Regards,
R Usha
usha@livecjobs.com
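As an illustration of the data-quality checks and Delta Lake storage mentioned in this JD, here is a small PySpark sketch that validates a dataset before appending it to a Delta table. The paths, column names, and rules are hypothetical, and the Delta write assumes a Databricks runtime or a cluster with delta-spark configured.

```python
# Sketch: basic validation rules, then append to a Delta table. Paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

df = spark.read.parquet("s3://example-bucket/raw/transactions/")  # hypothetical path

# Required key present, no duplicate ids, amounts non-negative.
total = df.count()
null_ids = df.filter(F.col("transaction_id").isNull()).count()
dup_ids = total - df.dropDuplicates(["transaction_id"]).count()
bad_amounts = df.filter(F.col("amount") < 0).count()

if null_ids or dup_ids or bad_amounts:
    raise ValueError(
        f"Data quality failure: nulls={null_ids}, dupes={dup_ids}, negative_amounts={bad_amounts}"
    )

# Checks passed: append to a date-partitioned Delta table.
(
    df.withColumn("ingest_date", F.current_date())
    .write.format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .save("s3://example-bucket/curated/transactions/")
)
```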
Posted 3 weeks ago
6.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
What Success Looks Like In This Role
- Proficiency in AWS services such as Data Lake, AWS Glue, S3, Step Functions, CodePipeline, QuickSight, Redshift, Athena, EC2, EBS, Route 53, SNS, CloudFormation, CloudFront, CloudWatch, IAM, etc.
- Setting up monitoring and troubleshooting infrastructure components hosted in various cloud environments and services, primarily on AWS.
- Experience in creating IaC deployment pipelines, and in IaC using AWS CloudFormation and Terraform.
- Automation of various processes and tasks using configuration management tools like Ansible, Chef, Puppet, etc.
- Provide the required level of support to customers, respond to reported issues within the agreed SLAs, and act as a shift engineer for the team handling first-level support.
- Installing, configuring, and maintaining Test/Dev/Prod environments on-premises and on cloud (AWS EC2).
- Solution, configuration, and implementation of PaaS solutions on AWS/Azure.
- Responsible for day-to-day operations: incident, service request, and change management.
- Create Terraform/CloudFormation/ARM templates to create and maintain AWS/Azure infrastructure.
- Collaborate with teams, be the point of contact for customers, and participate in regular cadence.

You will be successful in this role if you have:
- Bachelor of Engineering with over 6 years' relevant experience, OR an equivalent combination of education and experience. Requires a minimum of 6+ years of related experience with a bachelor's degree or equivalent experience.
- Good knowledge of AWS cloud services and operations.
- Ability to collaborate with senior engineers and leads on cloud migrations and operations, and to work closely with the leads or teams to rectify and resolve issues in a timely manner across environments.
- Knowledge of AWS Identity and Access Management (IAM).
- Hands-on knowledge of Linux and Windows OS.
- Knowledge of ticketing tools like ServiceNow, JIRA, etc.
- Experience working in 24x7 support environments.
- Good knowledge of ITIL/ITSM processes.
- Cloud-related certifications preferred.

Unisys is proud to be an equal opportunity employer that considers all qualified applicants without regard to age, blood type, caste, citizenship, color, disability, family medical history, family status, ethnicity, gender, gender expression, gender identity, genetic information, marital status, national origin, parental status, pregnancy, race, religion, sex, sexual orientation, transgender status, veteran status or any other category protected by law. This commitment includes our efforts to provide for all those who seek to express interest in employment the opportunity to participate without barriers. If you are a US job seeker unable to review the job opportunities herein, or cannot otherwise complete your expression of interest, without additional assistance and would like to discuss a request for reasonable accommodation, please contact our Global Recruiting organization at GlobalRecruiting@unisys.com or alternatively Toll Free: 888-560-1782 (Prompt 4). US job seekers can find more information about Unisys’ EEO commitment here.
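To illustrate the monitoring-setup responsibility above, here is a small boto3 sketch that creates a CloudWatch CPU alarm for an EC2 instance. The instance id, SNS topic ARN, and thresholds are hypothetical; in a role like this, such resources would more likely be declared in CloudFormation or Terraform, as the posting notes.

```python
# Illustrative CloudWatch alarm creation; identifiers and thresholds are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu-example",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,             # 5-minute datapoints
    EvaluationPeriods=3,    # alarm after 15 minutes above threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # hypothetical topic
    AlarmDescription="Example alarm; a real setup would come from CloudFormation/Terraform.",
)
```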
Posted 3 weeks ago
5.0 years
0 Lacs
Hyderābād
Remote
About Us:
Please be aware we have noticed an increase in hiring scams potentially targeting Seismic candidates. Read our full statement on our Careers page.

Seismic is the global leader in AI-powered enablement, empowering go-to-market leaders to drive strategic growth and deliver exceptional customer experiences at scale. The Seismic Enablement Cloud™ is the only unified AI-powered platform that prepares customer-facing teams with the skills, content, tools, and insights needed to maximize every buyer interaction and strengthen client relationships. Trusted by more than 2,000 organizations worldwide, Seismic helps businesses achieve measurable outcomes and accelerate revenue growth. Seismic is headquartered in San Diego with offices across North America, Europe, Asia and Australia. Learn more at seismic.com.

Seismic is committed to building an inclusive workplace that ignites growth for our employees and creates a culture of belonging that allows all employees to be seen and valued for who they are. Learn more about DEI at Seismic here.

Overview:
We’re looking for a talented technology leader to join our passionate engineering team as a Senior Software Engineer and help us scale and grow our cloud-based systems and technologies with a keen eye towards software quality and operational excellence. As a tech “unicorn” headquartered in San Diego, this is an amazing opportunity for the right person to join and guide the technical vision of this pre-IPO software company as we make history in the sales enablement space!

As the Senior Software Engineer, you will play a vital part in driving solid cloud architecture and ensuring best engineering practices across multiple engineering teams. You, along with your globally dispersed teammates, will collaborate to build microservice-based systems that support multiple products for sharing and collaboration between our customers' sales and marketing departments. You will work closely with our product leads, engineering leads, and teams, and contribute to best practices for CI/CD. You will also mentor junior engineers and help grow a strong engineering team. This is an opportunity to work as a technical thought-leader and share ideas to build a best-in-class microservice that wows our internal and external stakeholders with its functionality and simplicity.

Who you are:
- Bachelor's degree in Computer Science, a similar technical field of study, or equivalent practical experience.
- 5+ years of software engineering experience and a passion for building and innovating – you stay up to date with the latest technologies and trends in development.
- Strong familiarity with .NET Core and C#, or similar object-oriented languages and frameworks.
- Data warehouse experience with Snowflake or similar (AWS Redshift, Apache Iceberg, ClickHouse, etc.).
- Familiarity with RESTful microservice-based APIs.
- Experience with the Scrum and Agile development processes.
- Familiarity and comfort developing in cloud-based environments (Azure, AWS, Google Cloud, etc.).
- Optional: Experience with HTML/CSS/JS and modern SPA frameworks (React, Vue.js, etc.).
- Optional: Experience with 3rd party integrations.
- Optional: Familiarity with meeting systems like Zoom, WebEx, MS Teams.
- Optional: Familiarity with CRM systems like Salesforce, Microsoft Dynamics 365, HubSpot.
Seen as an active contributor in the team problem-solving-process – you aren't afraid to share your opinions in a low-ego manner or roll up your sleeves and write critical path code or refactor a significant piece of code. Deep experience across multiple software projects, driving end-to-end software development lifecycle of an architecturally complex system or product. Ability to think tactically as well as strategically, respecting what came before you and always thinking longer-term. Highly focused on operational excellence and software quality, with experience in CI/CD and best operational practices. Your technical skills are sought after as you develop in a pragmatic and efficient manner. You enjoy solving challenging problems, all while having a blast with equally passionate and talented team members. Conversant in AI engineering. You’ve been experimenting with building ai solutions/integrations using LLMs, prompts, Copilots, Agentic ReAct workflows, etc. What you'll be doing:: Develop, improve, and maintain, our microservice and ensure seamless integration to the rest of the Seismic platform. Help grow a new local engineering team while collaborating and driving technical and architectural decisions across multiple remote teams. Collaborate with globally-dispersed product managers, designers, and software engineers to rapidly build, test, and deploy code to create innovative solutions and add values to our customers' experience with Seismic. Explore new technologies and industry trends and bring your findings to life in our products. Participate in and contribute towards code reviews, bug/issue triage, and documentation. Contribute to troubleshooting and continuous quality improvements. Job Posting Footer: If you are an individual with a disability and would like to request a reasonable accommodation as part of the application or recruiting process, please click here. Headquartered in San Diego and with employees across the globe, Seismic is the global leader in sales enablement , backed by firms such as Permira, Ameriprise Financial, EDBI, Lightspeed Venture Partners, and T. Rowe Price. Seismic also expanded its team and product portfolio with the strategic acquisitions of SAVO, Percolate, Grapevine6, and Lessonly. Our board of directors is composed of several industry luminaries including John Thompson, former Chairman of the Board for Microsoft. Seismic is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to gender, age, race, religion, or any other classification which is protected by applicable law. Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice. Linkedin Posting Section: #LI-ST1
Posted 3 weeks ago
5.0 years
4 - 9 Lacs
Hyderābād
On-site
What Will You Do? Build and own ETL data pipelines that will power all our reporting & analytics needs Develop clean, safe, testable and cost-efficient solutions Build fast and reliable pipelines with underlying data model that can scale according to business needs and growth Understand the system you are building, foresee possible shortcomings and be able to resolve or compromise appropriately Mentor Junior engineers in data Quality/pipelines etc. Company Overview Fanatics is building the leading global digital sports platform to ignite and harness the passions of fans and maximize the presence and reach for hundreds of partners globally. Leveraging these long-standing partnerships, a database of more than 80 million global consumers and a trusted, beyond recognizable brand name, Fanatics is expanding its position as the global leader for licensed sports merchandise to now becoming a next-gen digital sports platform, featuring an array of offerings across the sports ecosystem. The Fanatics family of companies currently includes Fanatics Commerce, a vertically-integrated licensed merchandise business that has changed the way fans purchase their favorite team apparel, jerseys, headwear and hardgoods through a tech-infused approach to making and quickly distributing fan gear in today’s 24/7 mobile-first economy; Fanatics Collectibles, a transformative company that is building a new model for the hobby and giving collectors an end-to-end collectibles experience; and Fanatics Betting & Gaming, a mobile betting, gaming and retail Sportsbook platform. all major Fanatics’ partners include professional sports leagues (NFL, MLB, NBA, NHL, NASCAR, MLS, PGA) and hundreds of collegiate and professional teams, which include several of the biggest global soccer clubs. As a market leader with more than 8,000 employees, and hundreds of partners, suppliers, and vendors worldwide, we take responsibility for driving toward more ethical and sustainable practices. We are committed to building an inclusive Fanatics community, reflecting and representing society at every level of the business, including our employees, vendors, partners and fans. Fanatics is also dedicated to making a positive impact in the communities where we all live, work, and play through strategic philanthropic initiatives. At Fanatics, we’re a diverse, passionate group of employees aiming to ignite pride and passion in the fans we outfit, celebrate and support. We recognize that diversity helps drive and foster innovation, and through our IDEA program (inclusion, diversity, equality and advocacy) at Fanatics we provide employees with tools and resources to feel connected and engaged in who they are and what they do to support the ultimate fan experience. Job Requirements Must have 5+ years of experience in Data Engineering field, with a proven track record of exposure in Big Data technologies such as Hadoop, Amazon EMR, Hive, Spark. Expertise in SQL technologies and at least one major Data Warehouse technology (Snowflake, RedShift, BigQuery etc.). Must have experience in building data platform – designing and building data model, integrate data from many sources, build ETL and data-flow pipelines, and support all parts of the data platform. Programming proficiency in Python and Scala, with experience writing modular, reusable, and testable code, including robust error handling and logging in data engineering applications. Hands-on experience with AWS cloud services, particularly in areas such as S3, Lambda, Glue, EC2, RDS, and IAM. 
Experience with orchestration tools such as Apache Airflow for scheduling, monitoring, and managing data pipelines in a production environment.
Familiarity with CI/CD practices, automated deployment pipelines, and version control systems (e.g., Git, GitHub/GitLab), ensuring reliable and repeatable data engineering workflows.
Data analysis skills – able to make arguments with data and proper visualization.
Energetic, enthusiastic, detail-oriented, and passionate about producing high-quality analytics deliverables.
Must have experience in developing applications with high performance and low latency.
Ability to take ownership of initiatives and drive them independently from conception to delivery, including post-deployment monitoring and support.
Strong communication and interpersonal skills with the ability to build relationships with stakeholders, understand business requirements, and translate them into technical solutions.
Comfortable working cross-functionally in a multi-team environment, collaborating with data analysts, product managers, and engineering teams to deliver end-to-end data solutions.

Job Description
We are seeking a Sr. Data Engineer who has strong design and development skills and upholds scalability, availability, and excellence when building the next generation of our data pipelines and platform. You are an expert in various data processing technologies and data stores, appreciate the value of clear communication and collaboration, and are devoted to continual capacity planning and performance fine-tuning for emerging business growth. As the Senior Data Engineer, you will be mentoring junior engineers in the team.

Good to have:
Experience in web services, API integration, and data exchanges with third parties is preferred.
Experience in Snowflake is a big plus.
Experience in NoSQL technologies (MongoDB, FoundationDB, Redis) is a plus.
We would appreciate candidates who can demonstrate business-side functional understanding and effectively communicate the business context alongside their technical expertise.
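As a small illustration of the "modular, reusable, and testable code with robust error handling and logging" this posting asks for, here is a sketch of a retryable extract-transform-load step in plain Python. The data, retry policy, and stubbed load are hypothetical.

```python
# Sketch of a modular, retryable ETL step with logging; all data is synthetic.
import logging
import time

import pandas as pd

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("orders_etl")


def with_retries(fn, attempts: int = 3, backoff_seconds: float = 2.0):
    """Run fn(), retrying transient failures with simple linear backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            log.exception("Attempt %d/%d failed", attempt, attempts)
            if attempt == attempts:
                raise
            time.sleep(backoff_seconds * attempt)


def extract() -> pd.DataFrame:
    # A real job might read from S3, an API, or a warehouse table.
    return pd.DataFrame({"sku": ["JERSEY-1", "CAP-2"], "units": [3, 5], "price": [49.9, 19.9]})


def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["revenue"] = df["units"] * df["price"]
    return df


def load(df: pd.DataFrame) -> None:
    log.info("Loaded %d rows (stubbed write)", len(df))


if __name__ == "__main__":
    frame = with_retries(extract)
    load(transform(frame))
```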
Posted 3 weeks ago
2.0 years
6 - 8 Lacs
Hyderābād
On-site
- 2+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL etc. experience - Experience with data visualization using Tableau, Quicksight, or similar tools - Experience with scripting language (e.g., Python, Java, or R) - Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries) - Experience applying basic statistical methods (e.g. regression) to difficult business problems - Experience gathering business requirements, using industry standard business intelligence tool(s) to extract data, formulate metrics and build reports - Track record of generating key business insights and collaborating with stakeholders When you attract people who have the DNA of pioneers and the DNA of explorers, you build a company of like-minded people who want to invent. And that’s what they think about when they get up in the morning: how are we going to work backwards from customers and build a great service or a great product” – Jeff Bezos Amazon.com’s success is built on a foundation of customer obsession. Have you ever thought about what it takes to successfully deliver millions of packages to Amazon customers seamlessly every day like a clock work? In order to make that happen, behind those millions of packages, billions of decision gets made by machines and humans. What is the accuracy of customer provided address? Do we know exact location of the address on Map? Is there a safe place? Can we make unattended delivery? Would signature be required? If the address is commercial property? Do we know open business hours of the address? What if customer is not home? Is there an alternate delivery address? Does customer have any special preference? What are other addresses that also have packages to be delivered on the same day? Are we optimizing delivery associate’s route? Does delivery associate know locality well enough? Is there an access code to get inside building? And the list simply goes on. At the core of all of it lies quality of underlying data that can help make those decisions in time. The person in this role will be a strong influencer who will ensure goal alignment with Technology, Operations, and Finance teams. This role will serve as the face of the organization to global stakeholders. This position requires a results-oriented, high-energy, dynamic individual with both stamina and mental quickness to be able to work and thrive in a fast-paced, high-growth global organization. Excellent communication skills and executive presence to get in front of VPs and SVPs across Amazon will be imperative. Key Strategic Objectives: Amazon is seeking an experienced leader to own the vision for quality improvement through global address management programs. As a Business Intelligence Engineer of Amazon last mile quality team, you will be responsible for shaping the strategy and direction of customer-facing products that are core to the customer experience. As a key member of the last mile leadership team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute on a fast-moving set of priorities, competitive pressures, and operational initiatives. You will partner closely with product and technology teams to define and build innovative and delightful experiences for customers. 
You must be highly analytical, able to work extremely effectively in a matrix organization, and have the ability to break complex problems down into steps that drive product development at Amazon speed. You will set the tempo for defect reduction through continuous improvement and drive accountability across multiple business units in order to deliver large scale high visibility/ high impact projects. You will lead by example to be just as passionate about operational performance and predictability as you will be about all other aspects of customer experience. The successful candidate will be able to: - Effectively manage customer expectations and resolve conflicts that balance client and company needs. - Develop process to effectively maintain and disseminate project information to stakeholders. - Be successful in a delivery focused environment and determining the right processes to make the team successful. - This opportunity requires excellent technical, problem solving, and communication skills. The candidate is not just a policy maker/spokesperson but drives to get things done. - Possess superior analytical abilities and judgment. Use quantitative and qualitative data to prioritize and influence, show creativity, experimentation and innovation, and drive projects with urgency in this fast-paced environment. - Partner with key stakeholders to develop the vision and strategy for customer experience on our platforms. Influence product roadmaps based on this strategy along with your teams. - Support the scalable growth of the company by developing and enabling the success of the Operations leadership team. - Serve as a role model for Amazon Leadership Principles inside and outside the organization - Actively seek to implement and distribute best practices across the operation Knowledge of how to improve code quality and optimizes BI processes (e.g. speed, cost, reliability) Knowledge of data modeling and data pipeline design Experience in designing and implementing custom reporting systems using automation tools Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 3 weeks ago
3.0 years
6 - 8 Lacs
Hyderābād
On-site
DESCRIPTION The Maintenance Automation Platform (MAP) team within the Global Reliability and Maintenance Engineering (RME) Central Team is looking for an exceptional business intelligence engineer to join us. In this role, you will work on an analytical team to bust myths, create insights, and produce recommendations to help Central RME to deliver world class service to the Amazon network. As part of the team will be involved in all phases of research, experiment, design and analysis, including defining research questions, designing experiments, identifying data requirements, and communicating insights and recommendations. You'll also be expected to continuously learn new systems, tools, and industry best practices to analyze big data and enhance our analytics. These are exciting fast-paced businesses in which we get to work on extremely interesting analytical problems, in an environment where you get to learn from other engineers and apply business intelligence to help leadership make informed decisions. Your work focuses on complex and/or ambiguous problem areas in existing or new BI initiatives. You take the long term view of your team's solutions and how they fit into the team’s architecture. You consider where each solution is in its lifecycle and where appropriate, proactively fix architecture deficiencies You understand capabilities and limitations of the systems you work with (e.g. cluster size, concurrent users, data classification). You are able to explain these limitations to technical and non-technical audiences, helping them understand what’s currently possible and which efforts need a technology investment You take ownership of team infrastructure, providing a system-wide view and design guidance. You make things simpler. You drive BI engineering best practices (e.g. Operational Excellence, code reviews, syntax and naming convention, metric definitions, alarms) and set standards. You collaborate with customers and other internal partners to refine the problem into specific deliverables, and you understand the business context well enough to recommend alternatives and anticipate future requests. In addition to stakeholders, you may work with partner teams (business and technical) and Data Engineers/Data Scientists/BA/SDES/other BIEs to design and deliver the right solution. You contribute to the professional development of colleagues, improving their business and technical knowledge and their understanding of BI engineering best practices. Key job responsibilities Own the development, and maintenance of ongoing metrics, reports, analyses, dashboards on the key drivers of our business Partner with operations and business teams to consult, develop and implement KPI’s, automated reporting solutions and infrastructure improvements to meet business needs Develop and maintain scaled, automated, user-friendly systems, reports, dashboards, etc. that will support business needs Perform both ad-hoc and strategic analyses Strong verbal/written communication and presentation skills, including an ability to effectively communicate with both business and technical teams. BASIC QUALIFICATIONS 3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL etc. 
experience
Experience with data visualization using Tableau, QuickSight, or similar tools
Experience with data modeling, warehousing, and building ETL pipelines
Experience in statistical analysis packages such as R, SAS, and MATLAB
Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling

PREFERRED QUALIFICATIONS
Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Job details: IND, TS, Hyderabad | Business Intelligence
Posted 3 weeks ago
8.0 years
30 - 38 Lacs
Gurgaon
Remote
Role: AWS Data Engineer Location: Gurugram Mode: Hybrid Type: Permanent Job Description: We are seeking a talented and motivated Data Engineer with requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and utilizing AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment. Key Responsibilities: Data Pipeline Development: Design, develop, and optimize end-to-end data pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes. Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS (Amazon Web Services) using services like S3, Redshift, Lambda, Glue, Kinesis, and others. Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis. Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows. Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages. Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly. Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to ensure compliance and security standards are met. Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability. Qualifications: Essential Skills: Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets. AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2. ETL Processes: Strong understanding of ETL concepts, tools, and frameworks. Experience with data integration, cleansing, and transformation. Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java). Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions like AWS Redshift, Snowflake, or similar platforms. Data Modeling: Experience in designing data models, schema design, and data architecture for analytical systems. Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines. Problem-Solving: Strong troubleshooting skills, with an ability to optimize performance and resolve technical issues across the data pipeline. Desirable Skills: Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies. Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies. Data Security: Experience implementing security best practices in the cloud and managing data privacy requirements. 
Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka. Business Intelligence Tools: Experience with BI tools (Tableau, Quicksight) for visualization and reporting. Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.) Job Type: Permanent Pay: ₹3,000,000.00 - ₹3,800,000.00 per year Benefits: Work from home Schedule: Day shift Monday to Friday Experience: Python : 3 years (Required) Data Engineering : 5 years (Required) Batch Technologies Hadoop, Hive, Athena, Presto, Spark : 4 years (Required) SQL/Queries : 3 years (Required) AWS Elastic MapReduce (EMR): 4 years (Required) AWS CDK, Cloud-formation, Lambda, Step-function: 3 years (Required) AWS Glue Catalog : 3 years (Required) Work Location: In person
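As an illustration of the event-driven AWS services this posting lists (S3, Lambda, Kinesis), here is a sketch of a Lambda handler that forwards S3 object-created notifications into a Kinesis stream. The stream name is hypothetical, and the handler assumes the standard S3 event payload.

```python
# Illustrative Lambda handler: S3 notification -> Kinesis record. Stream name is hypothetical.
import json

import boto3

kinesis = boto3.client("kinesis")
STREAM_NAME = "raw-file-events"  # hypothetical stream


def lambda_handler(event, context):
    """Triggered by S3 notifications; emits one Kinesis record per new object."""
    records = event.get("Records", [])
    for record in records:
        payload = {
            "bucket": record["s3"]["bucket"]["name"],
            "key": record["s3"]["object"]["key"],
            "size": record["s3"]["object"].get("size", 0),
        }
        kinesis.put_record(
            StreamName=STREAM_NAME,
            Data=json.dumps(payload).encode("utf-8"),
            PartitionKey=payload["key"],
        )
    return {"forwarded": len(records)}
```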
Posted 3 weeks ago
4.0 years
0 Lacs
India
On-site
Job Information Date Opened 07/11/2025 City Saidapet Country India Job Role Data Engineering State/Province Tamil Nadu Industry IT Services Job Type Full time Zip/Postal Code 600096 Job Description Introduction to the Role: Are you passionate about unlocking the power of data to drive innovation and transform business outcomes? Join our cutting-edge Data Engineering team and be a key player in delivering scalable, secure, and high-performing data solutions across the enterprise. As a Data Engineer , you will play a central role in designing and developing modern data pipelines and platforms that support data-driven decision-making and AI-powered products. With a focus on Python , SQL , AWS , PySpark , and Databricks , you'll enable the transformation of raw data into valuable insights by applying engineering best practices in a cloud-first environment. We are looking for a highly motivated professional who can work across teams to build and manage robust, efficient, and secure data ecosystems that support both analytical and operational workloads. Accountabilities: Design, build, and optimize scalable data pipelines using PySpark , Databricks , and SQL on AWS cloud platforms . Collaborate with data analysts, data scientists, and business users to understand data requirements and ensure reliable, high-quality data delivery. Implement batch and streaming data ingestion frameworks from a variety of sources (structured, semi-structured, and unstructured data). Develop reusable, parameterized ETL/ELT components and data ingestion frameworks. Perform data transformation, cleansing, validation, and enrichment using Python and PySpark . Build and maintain data models, data marts, and logical/physical data structures that support BI, analytics, and AI initiatives. Apply best practices in software engineering, version control (Git), code reviews, and agile development processes. Ensure data pipelines are well-tested, monitored, and robust with proper logging and alerting mechanisms. Optimize performance of distributed data processing workflows and large datasets. Leverage AWS services (such as S3, Glue, Lambda, EMR, Redshift, Athena) for data orchestration and lakehouse architecture design. Participate in data governance practices and ensure compliance with data privacy, security, and quality standards. Contribute to documentation of processes, workflows, metadata, and lineage using tools such as Data Catalogs or Collibra (if applicable). Drive continuous improvement in engineering practices, tools, and automation to increase productivity and delivery quality. Essential Skills / Experience: 4 to 6 years of professional experience in Data Engineering or a related field. Strong programming experience with Python and experience using Python for data wrangling, pipeline automation, and scripting. Deep expertise in writing complex and optimized SQL queries on large-scale datasets. Solid hands-on experience with PySpark and distributed data processing frameworks. Expertise working with Databricks for developing and orchestrating data pipelines. Experience with AWS cloud services such as S3 , Glue , EMR , Athena , Redshift , and Lambda . Practical understanding of ETL/ELT development patterns and data modeling principles (Star/Snowflake schemas). Experience with job orchestration tools like Airflow , Databricks Jobs , or AWS Step Functions . Understanding of data lake, lakehouse, and data warehouse architectures. 
Familiarity with DevOps and CI/CD tools for code deployment (e.g., Git, Jenkins, GitHub Actions). Strong troubleshooting and performance optimization skills in large-scale data processing environments. Excellent communication and collaboration skills, with the ability to work in cross-functional agile teams. Desirable Skills / Experience: AWS or Databricks certifications (e.g., AWS Certified Data Analytics, Databricks Data Engineer Associate/Professional). Exposure to data observability , monitoring , and alerting frameworks (e.g., Monte Carlo, Datadog, CloudWatch). Experience working in healthcare, life sciences, finance, or another regulated industry. Familiarity with data governance and compliance standards (GDPR, HIPAA, etc.). Knowledge of modern data architectures (Data Mesh, Data Fabric). Exposure to streaming data tools like Kafka, Kinesis, or Spark Structured Streaming. Experience with data visualization tools such as Power BI, Tableau, or QuickSight. Work Environment & Collaboration: We value a hybrid, collaborative environment that encourages shared learning and innovation. You will work closely with product owners, architects, analysts, and data scientists across geographies to solve real-world business problems using cutting-edge technologies and methodologies. We encourage flexibility while maintaining a strong in-office presence for better team synergy and innovation. About Agilisium - Agilisium, is an AWS technology Advanced Consulting Partner that enables companies to accelerate their "Data-to-Insights-Leap. With $25+ million in annual revenue and over 40% year-over-year growth, Agilisium is one of the fastest-growing IT solution providers in Southern California. Our most important asset? People. Talent management plays a vital role in our business strategy. We’re looking for “drivers”; big thinkers with growth and strategic mindset — people who are committed to customer obsession, aren’t afraid to experiment with new ideas. And we are all about finding and nurturing individuals who are ready to do great work. At Agilisium, you’ll collaborate with great minds while being challenged to meet and exceed your potential
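One concrete example of the AWS services named above: running an Athena query from Python with boto3 and polling for completion. The database, table, and results bucket are hypothetical; a production workflow might hand the polling off to Step Functions or Airflow instead.

```python
# Illustrative Athena query runner; database, table, and output bucket are hypothetical.
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")


def run_query(sql: str, database: str, output_location: str) -> list:
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_location},
    )["QueryExecutionId"]

    # Poll until the query finishes.
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)
    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")
    return athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]


rows = run_query(
    "SELECT order_date, count(*) AS orders FROM sales GROUP BY order_date",  # hypothetical table
    database="analytics_db",
    output_location="s3://example-athena-results/",
)
print(len(rows))
```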
Posted 3 weeks ago
4.0 - 6.0 years
4 - 6 Lacs
Chennai
On-site
Role: AWS Data Engineer

Job Summary
We are looking for a skilled AWS Data Engineer with strong experience in building and managing cloud-based ETL pipelines using AWS EMR, Glue, and Python/PySpark, along with data analytics expertise in Amazon Athena and Redshift. The ideal candidate will be responsible for designing, developing, and maintaining scalable data solutions in a cloud-native environment.

Responsibilities
- Design and implement ETL workflows using AWS EMR, Glue, Python, and PySpark
- Develop and optimize queries using Amazon Athena and Redshift
- Build scalable data pipelines to ingest, transform, and load data from various sources
- Ensure data quality, integrity, and security across AWS services
- Collaborate with data analysts, data scientists, and business stakeholders to deliver data solutions
- Monitor and troubleshoot ETL jobs and cloud infrastructure performance
- Automate data workflows and integrate with CI/CD pipelines

Required Skills and Qualifications
- Hands-on experience with AWS EMR, Glue, Athena, and Redshift
- Strong programming skills in Python with pandas, NumPy, PySpark, and SQL
- Experience with ETL design, implementation, and optimization
- Familiarity with S3, Lambda, CloudWatch, and other AWS services
- Experience with schema design, partitioning, and query optimization in Athena
- Proficiency in version control (Git) and agile development practices

Work Location: Chennai, Bangalore, Hyderabad
Tier Level: 3-4
Experience: 4-6 years

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
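To illustrate the partitioning and Athena query-optimization skills this role asks for, here is a PySpark sketch that writes Parquet partitioned by date so that date-filtered Athena queries scan only the matching S3 prefixes. The bucket paths and column names are hypothetical.

```python
# Sketch: write date-partitioned Parquet for partition-pruned Athena queries.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("partitioned_write").getOrCreate()

orders = spark.read.json("s3://example-bucket/raw/orders/")  # hypothetical source

(
    orders
    .withColumn("order_date", F.to_date("order_timestamp"))
    .repartition("order_date")          # group rows by partition value to avoid many small files
    .write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/orders/")
)
# In Athena, a query filtered on order_date then scans only the matching prefixes,
# e.g. SELECT count(*) FROM orders WHERE order_date = DATE '2025-07-01'.
```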
Posted 3 weeks ago
4.0 years
4 - 6 Lacs
Chennai
On-site
Job Title: Data Engineer – C11/Officer (India) The Role The Data Engineer is accountable for developing high-quality data products to support the Bank’s regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team. Responsibilities Developing and supporting scalable, extensible, and highly available data solutions Deliver on critical business priorities while ensuring alignment with the wider architectural vision Identify and help address potential risks in the data supply chain Follow and contribute to technical standards Design and develop analytical data models Required Qualifications & Work Experience First Class Degree in Engineering/Technology (4-year graduate course) 5 to 8 years’ experience implementing data-intensive solutions using agile methodologies Experience of relational databases and using SQL for data querying, transformation and manipulation Experience of modelling data for analytical consumers Ability to automate and streamline the build, test and deployment of data pipelines Experience in cloud native technologies and patterns A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training Excellent communication and problem-solving skills Technical Skills (Must Have) ETL: Hands-on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica Big Data: Experience of ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing Data Warehousing & Database Management: Understanding of Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management Technical Skills (Valuable) Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across full suite of Ab Initio toolsets e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls Containerization: Fair understanding of containerization platforms like Docker, Kubernetes File Formats: Exposure in working on Event/File/Table Formats such as Avro, Parquet, Protobuf, Iceberg, Delta Others: Basics of job schedulers like Autosys. Basics of entitlement management Certification on any of the above topics would be an advantage. - Job Family Group: Technology - Job Family: Digital Software Engineering - Time Type: Full time - Most Relevant Skills Please see the requirements listed above. - Other Relevant Skills For complementary skills, please see above and/or contact the recruiter.
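To make the "design and develop analytical data models" responsibility concrete, here is a minimal, hypothetical sketch of the kind of transformation such a pipeline might perform with Spark — joining a transactions feed to reference data and producing a conformed, query-ready table. All table and column names are invented for illustration and are not taken from the posting:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("analytical-model").getOrCreate()

# Hypothetical source tables assumed to be registered in the metastore.
trades = spark.table("staging.trades")      # assumed raw feed
books = spark.table("reference.books")      # assumed reference data

# Conform and aggregate into an analytical model that downstream consumers can query directly.
daily_positions = (
    trades.join(books, "book_id", "left")
          .groupBy("business_date", "desk", "currency")
          .agg(F.sum("notional").alias("total_notional"),
               F.countDistinct("trade_id").alias("trade_count"))
)

daily_positions.write.mode("overwrite").saveAsTable("analytics.daily_positions")
```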
- Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 3 weeks ago
3.0 years
3 - 10 Lacs
Chennai
On-site
DESCRIPTION Amazon is looking for a data-savvy professional to create, report on, and monitor business and operations metrics. Amazon has a culture of data-driven decision-making, and demands business intelligence that is timely, accurate, and actionable. This role will help scope, influence, and evaluate process improvements, and will contribute to Amazon’s success by enabling data-driven decision making that will impact the customer experience. Key job responsibilities You love working with data, can create clear and effective reports and data visualizations, and can partner with customers to answer key business questions. You will also have the opportunity to display your skills in the following areas: Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions. Analyze the current testing processes, identify improvement opportunities, define requirements, and work with technical teams and managers to integrate them into their development schedules. Demonstrate good judgment in solving problems as well as identifying problems in advance, and proposing solutions. Derive actionable insights and present recommendations from your analyses to partner teams and organizational leadership. Translate technical testing results into business-friendly reports. Have a strong desire to dive deep and demonstrate the ability to do it effectively. Share your expertise - partner with and empower product teams to perform their own analyses. Produce high-quality documentation for processes and analysis results. A day in the life We are looking for a Business Analyst to join our team. This person will be a creative problem solver who cares deeply about what our customers experience, and is a highly analytical, team-oriented individual with excellent communication skills. In this highly visible role, you will provide reporting, analyze data, make sense of the results and be able to explain what it all means to key stakeholders, such as Front Line Managers (FLMs), QA Engineers and Project Managers. You are a self-starter while being a reliable teammate; you are comfortable with ambiguity in a fast-paced and ever-changing environment; you are able to see the big picture while paying meticulous attention to detail; you know what it takes to build trust; you are curious and thrive on learning. You will become a subject matter expert in the Device OS world. About the team The Amazon Devices group delivers delightfully unique Amazon experiences, giving customers instant access to everything, digital or physical. The Device OS team plays a central role in creating these innovative devices at Lab126. The Device OS team is responsible for the board bring-up, low-level software, core operating system architecture, innovative framework feature development, associated cloud services and end-to-end system functions that bring these devices to life. The software built by the Device OS team runs on all Amazon consumer electronics devices.
BASIC QUALIFICATIONS 3+ years of Excel or Tableau (data manipulation, macros, charts and pivot tables) experience 2+ years of experience writing complex Excel VBA macros Experience defining requirements and using data and metrics to draw business insights Experience with SQL or ETL PREFERRED QUALIFICATIONS Experience creating complex SQL queries joining multiple datasets; familiarity with ETL and data warehouse concepts Experience in Amazon Redshift and other AWS technologies Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Job details IND, TN, Chennai Project/Program/Product Management-Non-Tech
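As a rough illustration of the "complex SQL queries joining multiple datasets" and Redshift reporting work this role describes, here is a minimal sketch that pulls a metric from Redshift and pivots it into a report-friendly layout with pandas. The connection string, table names, and columns are invented for illustration; Redshift is reachable over the PostgreSQL wire protocol, which is why a standard PostgreSQL driver is used:

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical Redshift connection (Redshift accepts the PostgreSQL protocol on port 5439).
engine = create_engine("postgresql+psycopg2://user:password@example-cluster:5439/analytics")

# Example metric: weekly test pass rate per device program (schema is invented).
sql = """
    select date_trunc('week', r.run_date) as week,
           p.program_name,
           avg(case when r.status = 'PASS' then 1.0 else 0.0 end) as pass_rate,
           count(*) as runs
    from test_runs r
    join programs p on p.program_id = r.program_id
    group by 1, 2
"""
metrics = pd.read_sql(sql, engine)

# Pivot into a dashboard/report layout: programs as columns, weeks as rows.
report = metrics.pivot(index="week", columns="program_name", values="pass_rate")
print(report.round(3))
```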
Posted 3 weeks ago
3.0 - 4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
We’re seeking a skilled Data Scientist with expertise in SQL, Python, AWS SageMaker, and Commercial Analytics to contribute to the team. You’ll design predictive models, uncover actionable insights, and deploy scalable solutions to recommend optimal customer interactions. This role is ideal for a problem-solver passionate about turning data into strategic value. Key Responsibilities Model Development: Build, validate, and deploy machine learning models (e.g., recommendation engines, propensity models) using Python and AWS SageMaker to drive next-best-action decisions. Data Pipeline Design: Develop efficient SQL queries and ETL pipelines to process large-scale commercial datasets (e.g., customer behavior, transactional data). Commercial Analytics: Analyze customer segmentation, lifetime value (CLV), and campaign performance to identify high-impact NBA opportunities. Cross-functional Collaboration: Partner with marketing, sales, and product teams to align models with business objectives and operational workflows. Cloud Integration: Optimize model deployment on AWS, ensuring scalability, monitoring, and performance tuning. Insight Communication: Translate technical outcomes into actionable recommendations for non-technical stakeholders through visualizations and presentations. Continuous Improvement: Stay updated on advancements in AI/ML, cloud technologies, and commercial analytics trends. Qualifications Education: Bachelor’s/Master’s in Data Science, Computer Science, Statistics, or a related field. Experience: 3-4 years in data science, with a focus on commercial/customer analytics (e.g., pharma, retail, healthcare, e-commerce, or B2B sectors). Technical Skills: Proficiency in SQL (complex queries, optimization) and Python (Pandas, NumPy, Scikit-learn). Hands-on experience with AWS SageMaker (model training, deployment) and cloud services (S3, Lambda, EC2). Familiarity with ML frameworks (XGBoost, TensorFlow/PyTorch) and A/B testing methodologies. Analytical Mindset: Strong problem-solving skills with the ability to derive insights from ambiguous data. Communication: Ability to articulate technical concepts to business stakeholders. Preferred Qualifications AWS Certified Machine Learning Specialty or similar certifications. Experience with big data tools (Spark, Redshift) or MLOps practices. Knowledge of NLP, reinforcement learning, or real-time recommendation systems. Exposure to BI tools (Tableau, Power BI) for dashboarding.
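For a sense of the propensity-modeling work described above, here is a minimal, hypothetical sketch using XGBoost and scikit-learn. The feature names, file path, and target column are invented; on SageMaker, training code like this would typically run inside a training job (for example, launched via the SageMaker Python SDK) rather than locally:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

# Hypothetical customer-level feature table; reading s3:// paths with pandas requires s3fs.
df = pd.read_parquet("s3://example-bucket/features/customer_features.parquet")

features = ["recency_days", "frequency_90d", "monetary_90d", "email_open_rate"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["converted"], test_size=0.2, random_state=42, stratify=df["converted"]
)

# Propensity model: probability that a customer responds to a given next-best action.
model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05, eval_metric="auc")
model.fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, scores), 3))
```

The scores would then feed a decision layer (e.g., rank actions per customer) and be evaluated with the A/B testing methodologies the posting mentions.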
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
chandigarh
On-site
As a 3D Modeling Graphic Designer at Mother Sparsh, you will play a crucial role in creating high-quality 3D models and photorealistic renderings for a variety of products. Your strong background in 3D modeling, rendering, and product visualization will be instrumental in transforming ideas into stunning visuals that meet user needs and elevate our brand. Collaborating closely with POCs from the Creative Department, Performance Marketing, and E-Commerce Department, you will refine designs to reflect brand standards and ensure consistency across all visuals. Your expertise in software such as Blender, Maya, 3ds Max, or Cinema 4D, along with experience in rendering engines like Arnold, V-Ray, Octane, Redshift, or Cycles, will enable you to produce photorealistic visualizations for print, digital platforms, and marketing collateral. With your creative vision and problem-solving skills, you will support and execute designs for events or experiential projects, while maintaining meticulous attention to detail throughout the design process. Staying updated on industry trends and emerging technologies, you will bring fresh, innovative ideas to the table and manage multiple projects simultaneously, adhering to deadlines and ensuring on-time delivery. To excel in this role, you should have 3+ years of relevant experience in 3D product visualization, along with excellent verbal and written communication skills to present and explain design concepts to higher management. If you are passionate about creating innovative designs and have a knack for visual storytelling, we welcome you to apply for this position based in Chandigarh.,
Posted 3 weeks ago
15.0 - 21.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Data Architect with over 15 years of experience, your primary responsibility will be to lead the design and implementation of scalable, secure, and high-performing data architectures. You will collaborate with business, engineering, and product teams to develop robust data solutions that support business intelligence, analytics, and AI initiatives. Your key responsibilities will include designing and implementing enterprise-grade data architectures using cloud platforms such as AWS, Azure, or GCP. You will lead the definition of data architecture standards, guidelines, and best practices while architecting scalable data solutions like data lakes, data warehouses, and real-time streaming platforms. Collaborating with data engineers, analysts, and data scientists, you will ensure optimal solutions are delivered based on data requirements. In addition, you will oversee data modeling activities encompassing conceptual, logical, and physical data models. It will be your duty to ensure data security, privacy, and compliance with relevant regulations like GDPR and HIPAA. Defining and implementing data governance strategies alongside stakeholders and evaluating data-related tools and technologies are also integral parts of your role. To excel in this position, you should possess at least 15 years of experience in data architecture, data engineering, or database development. Strong experience in architecting data solutions on major cloud platforms like AWS, Azure, or GCP is essential. Proficiency in data management principles, data modeling, ETL/ELT pipelines, and modern data platforms/tools such as Snowflake, Databricks, and Apache Spark is required. Familiarity with programming languages like Python, SQL, or Java, as well as real-time data processing frameworks like Kafka, Kinesis, or Azure Event Hub, will be beneficial. Moreover, experience in implementing data governance, data cataloging, and data quality frameworks is important. Knowledge of DevOps practices, CI/CD pipelines for data, and Infrastructure as Code (IaC) is a plus. Excellent problem-solving, communication, and stakeholder management skills are necessary for this role. A Bachelor's or Master's degree in Computer Science, Information Technology, or a related field is preferred, along with certifications like Cloud Architect or Data Architect (AWS/Azure/GCP). Join us at Infogain, a human-centered digital platform and software engineering company, where you will have the opportunity to work on cutting-edge data and AI projects in a collaborative and inclusive work environment. Experience competitive compensation and benefits while contributing to experience-led transformation for our clients in various industries.,
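Since the role calls out real-time processing with Kafka, Kinesis, or similar, here is a minimal, hypothetical PySpark Structured Streaming sketch that lands Kafka events in a data lake. The broker address, topic, schema, and S3 paths are invented, and running it assumes the Spark-Kafka connector package is available on the cluster:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

# Assumes the spark-sql-kafka connector is on the classpath (e.g., via --packages).
spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Hypothetical event schema; in a real platform this would come from a schema registry.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")   # assumed broker address
          .option("subscribe", "events")                       # assumed topic name
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Land the stream as Parquet in the lake; the checkpoint gives fault-tolerant, incremental writes.
query = (events.writeStream
         .format("parquet")
         .option("path", "s3://example-lake/events/")
         .option("checkpointLocation", "s3://example-lake/_checkpoints/events/")
         .start())
query.awaitTermination()
```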
Posted 3 weeks ago
15.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
hackajob is collaborating with J.P. Morgan to connect them with exceptional tech professionals for this role. If you are a software engineering leader ready to take the reins and drive impact, we’ve got an opportunity just for you. As a Director of Software Engineering at JPMorgan Chase within Commercial & Investment Banking, you will lead a technical domain, managing teams, technologies, and projects across multiple departments. Your deep expertise in software, applications, technical processes, and product management will be vital in managing numerous complex projects and initiatives, serving as the key decision maker for your teams while promoting innovation and effective solution delivery. With over 15 years of experience in developing Risk Management Systems for the Markets business, you have proven success in leading large teams and collaborating effectively with diverse internal stakeholders. Job Responsibilities Lead and manage large software development teams, fostering a culture of innovation, collaboration, and excellence Provide strategic direction and mentorship to team members, ensuring alignment with organizational goals. Establish and maintain effective partnerships with internal stakeholders across various lines of business. Oversee the planning, execution, and delivery of complex software projects, ensuring they meet business requirements, timelines, and budget constraints. Implement best practices in project management and software development methodologies. Leverage extensive technical expertise in Python and AWS cloud platforms to steer the creation of scalable and robust software solutions. Possess a strong understanding of data architectures to ensure optimal design and implementation. Work with Apache Spark, EMR, Glue, and Parquet to process and analyze large datasets, supporting insights and decision-making through advanced data analytics. Optimize Redshift-based data pipelines for performance, managing over 100+ TB of data, and ensuring efficient data processing and retrieval. Handle petabytes of data in data lake and data lakehouse environments like Iceberg, implementing best practices for data management, storage, and retrieval. Leverage experience with JPMorgan's in-house risk management system, Athena, across various lines of business such as the Markets business. Demonstrate a comprehensive understanding of the trade lifecycle, risk management, and financial products in investment banking, including options, swaps, equities, bonds, and repo. Collaborate with cross-functional teams to ensure seamless integration and functionality of software solutions. Promote innovation by staying abreast of emerging technologies and industry trends. Develop and implement strategies that enhance the efficiency and effectiveness of software development processes. Champion Agile methodologies to enhance project delivery, team collaboration, and continuous improvement. Facilitate Agile ceremonies and ensure adherence to Agile principles and practices. Required Qualifications, Capabilities And Skills Formal training or certification on large-scale technology program concepts and 10+ years of applied experience. In addition, 5+ years of experience leading technologists to manage, anticipate and solve complex technical items within your domain of expertise. Proven track record of leading large global teams (50+ developers).
Technical Skills: Expertise in Python, the AWS cloud platform (S3, Lambda, Kinesis, EC2, DynamoDB, EventBridge, MSK), big data technologies, and data lake/lakehouse architectures (Iceberg). Architect-level expertise in data solutions. Modern cloud and AI skills. Risk Management System: Extensive experience with risk management systems across various lines of business. Financial Products Knowledge: Strong understanding of the trade lifecycle, risk management, trade models, trade booking, and financial products in investment banking. Leadership Skills: Exceptional leadership, communication, and interpersonal skills. Ability to inspire and motivate teams to achieve high performance. Preferred Qualifications, Capabilities And Skills: Experience with JPMorgan's Athena RMS is preferred.
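As a small illustration of the event-driven AWS services listed above (Lambda, Kinesis, DynamoDB), here is a minimal, hypothetical handler that consumes trade events from a Kinesis stream and upserts them into a DynamoDB table. The table name and payload schema are invented; the function is assumed to be deployed as a Lambda with a Kinesis event source mapping:

```python
import base64
import json

import boto3

# Hypothetical table name; created outside this sketch.
dynamodb = boto3.resource("dynamodb")
positions_table = dynamodb.Table("risk-positions")

def handler(event, context):
    """Consume trade events from a Kinesis stream and upsert them into DynamoDB."""
    records = event.get("Records", [])
    for record in records:
        # Kinesis event payloads arrive base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        positions_table.put_item(Item={
            "trade_id": payload["trade_id"],              # assumed partition key
            "book": payload.get("book", "UNKNOWN"),
            "notional": str(payload.get("notional", 0)),  # stored as a string to avoid float issues
        })
    return {"processed": len(records)}
```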
Posted 3 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About The Job Key Result Areas And Activities: Design and Development: Design, implement, and manage Redshift clusters for high availability, performance, and security. Performance Optimization: Monitor and optimize database performance, including query tuning and resource management. Backup and Recovery: Develop and maintain database backup and recovery strategies. Security Enforcement: Implement and enforce database security policies and procedures. Cost-Performance Balance: Ensure an optimal balance between cost and performance. Collaboration with Development Teams: Work with development teams to design and optimize database schemas and queries. Perform database migrations, upgrades, and patching. Issue Resolution: Troubleshoot and resolve database-related issues, providing support to development and operations teams. Automate routine database tasks using scripting languages and tools. Must-Have: Strong understanding of database design, performance tuning, and optimization techniques. Proficiency in SQL and experience with database scripting languages (e.g., Python, Shell). Experience with database backup and recovery, security, and high availability solutions. Familiarity with AWS services and tools, including S3, EC2, IAM, and CloudWatch. Operating System - any flavor of Linux, Windows. Core Redshift Administration Skills - Cluster Management, Performance Optimization, workload management (WLM), vacuuming/analyzing tables for optimal performance, IAM policies, role-based access control, Backup & Recovery, automated backups, and restoration strategies. SQL Query Optimization, distribution keys, sort keys, and compression encoding. Knowledge of COPY and UNLOAD commands, S3 integration, and best practices for bulk data loading. Scripting & Automation for automating routine DBA tasks. Expertise in debugging slow queries, troubleshooting system tables.
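Because the posting emphasizes distribution keys, sort keys, compression encoding, and COPY-based bulk loading, here is a minimal sketch of those Redshift features driven from Python with psycopg2. The cluster endpoint, credentials, IAM role ARN, and table definition are placeholders, not values from the posting:

```python
import psycopg2

# Hypothetical cluster endpoint and credentials; Redshift speaks the PostgreSQL protocol.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="admin", password="***",
)
conn.autocommit = True
cur = conn.cursor()

# The distribution key co-locates rows joined on customer_id, the sort key speeds
# date-range scans, and explicit encodings control column compression.
cur.execute("""
    create table if not exists sales (
        sale_id     bigint        encode az64,
        customer_id bigint        encode az64,
        sale_date   date          encode az64,
        amount      numeric(12,2)
    )
    distkey (customer_id)
    sortkey (sale_date);
""")

# Bulk-load Parquet from S3 with COPY (the IAM role ARN below is a placeholder).
cur.execute("""
    copy sales
    from 's3://example-bucket/sales/'
    iam_role 'arn:aws:iam::123456789012:role/redshift-copy-role'
    format as parquet;
""")

cur.close()
conn.close()
```

Routine maintenance such as VACUUM/ANALYZE, WLM configuration, and slow-query investigation via the system tables would sit alongside loading logic like this in a DBA's automation scripts.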
Posted 3 weeks ago
6.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Job Summary We are looking for a skilled and motivated Software Engineer with strong experience in data engineering and ETL processes. The ideal candidate should be comfortable working with any object-oriented programming language, possess strong SQL skills, and have hands-on experience with AWS services like S3 and Redshift. Experience in Ruby and working knowledge of Linux are a plus. Key Responsibilities Design, build, and maintain robust ETL pipelines to handle large volumes of data. Work closely with cross-functional teams to gather data requirements and deliver scalable solutions. Write clean, maintainable, and efficient code using object-oriented programming and SOLID principles. Optimize SQL queries and data models for performance and reliability. Use AWS services (S3, Redshift, etc.) to develop and deploy data solutions. Troubleshoot issues in data pipelines and perform root cause analysis. Collaborate with DevOps/infra teams for deployment, monitoring, and scaling data pipelines. Skills: 6+ years of experience in Data Engineering. Programming: Proficiency in any object-oriented language (e.g., Java, Python, etc.). Bonus: Experience in Ruby is a big plus. SQL: Moderate to advanced skills in writing complex queries and handling data transformations. AWS: Must have hands-on experience with services like S3 and Redshift. Linux: Familiarity with Linux-based systems is good to have. Qualifications: Experience working in a data/ETL-focused role. Familiarity with version control systems like Git. Understanding of data warehouse concepts and performance tuning.
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
You will be responsible for designing, developing, and implementing data-centric software solutions using various technologies. This includes conducting code reviews, recommending best coding practices, and providing effort estimates for the proposed solutions. Additionally, you will design audit business-centric software solutions and maintain comprehensive documentation for all proposed solutions. As a key member of the team, you will lead architect and design efforts for product development and application development for relevant use cases. You will provide guidance and support to team members and clients, implementing best practices of data engineering and architectural solution design, development, testing, and documentation. Your role will require you to participate in team meetings, brainstorming sessions, and project planning activities. It is essential to stay up-to-date with the latest advancements in the data engineering area to drive innovation and maintain a competitive edge. You will stay hands-on with the design, development, and validation of systems and models deployed. Collaboration with audit professionals to understand business, regulatory, and risk requirements, as well as key alignment considerations for audit, is a crucial aspect of the role. Driving efforts in the data engineering and architecture practice area will be a key responsibility. In terms of mandatory technical and functional skills, you should have a deep understanding of RDBMS (MS SQL Server, ORACLE, etc.), strong programming skills in T-SQL, and proven experience in ETL and reporting (MSBI stack/COGNOS/INFORMATICA, etc.). Additionally, experience with cloud-centric databases (AZURE SQL/AWS RDS), ADF (AZURE Data Factory), data warehousing skills using SYNAPSE/Redshift, an understanding of and implementation experience with data lakes, and experience in large data processing/ingestion using Databricks APIs, Lakehouse, etc., are required. Knowledge of MPP databases like Snowflake/Postgres-XL is also essential. Preferred technical and functional skills include understanding financial accounting, experience with NoSQL using MONGODB/COSMOS, Python coding experience, and an aptitude for emerging data platform technologies like MS AZURE Fabric. Key behavioral attributes required for this role include strong analytical, problem-solving, and critical-thinking skills, excellent collaboration skills, the ability to work effectively in a team-oriented environment, excellent written and verbal communication skills, and the willingness to learn new technologies and work on them.
Posted 3 weeks ago
1.0 - 31.0 years
1 - 2 Lacs
Jaipur
On-site
🧠 About the Role We are seeking a proactive and detail-oriented Apache Superset & SQL Expert with 1+ years of experience in the healthcare domain. You’ll be responsible for building insightful BI dashboards and maintaining complex data pipelines to support mission-critical analytics for healthcare operations and compliance reporting. ✅ Key Responsibilities Develop and maintain advanced Apache Superset dashboards tailored for healthcare KPIs and operational metrics. Write, optimise, and maintain complex SQL queries to extract and transform data from multiple healthcare systems. Collaborate with data engineering and clinical teams to define and model datasets for visualisation. Ensure dashboards comply with healthcare data governance, privacy (e.g., HIPAA), and audit requirements. Monitor performance, implement row-level security, and maintain a robust Superset configuration. Translate clinical and operational requirements into meaningful visual stories. 🧰 Required Skills & Experience 1+ years of domain experience in healthcare analytics or working with healthcare datasets (EHR, claims, patient outcomes, etc.). 3+ years of experience working with Apache Superset in a production environment. Strong command over SQL, including query tuning, joins, aggregations, and complex transformations. Hands-on experience with data modelling and relational database design. Solid understanding of clinical terminology, healthcare KPIs, and reporting workflows. Experience in working with PostgreSQL, MySQL, or other SQL-based databases. Strong documentation, communication, and stakeholder-facing skills. 🌟 Nice-to-Have Familiarity with HIPAA, HL7/FHIR data structures, or other regulatory standards. Experience with Python, Flask, or Superset plugin development. Exposure to modern healthcare data platforms, dbt, or Airflow. Experience integrating Superset with EMR, clinical data lakes, or warehouse systems like Redshift or BigQuery.
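To illustrate the kind of healthcare KPI query that might sit behind a Superset chart, here is a minimal, hypothetical sketch: a 30-day readmission rate per facility per month, run against PostgreSQL from Python. The schema, connection string, and column names are invented; in practice Superset would own the database connection and the SQL would be saved as a virtual dataset:

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical read-only warehouse connection.
engine = create_engine("postgresql+psycopg2://readonly:***@warehouse:5432/healthcare")

# Example KPI: 30-day readmission rate per facility per month (schema is invented).
sql = """
    with discharges as (
        select a.patient_id,
               a.facility_id,
               a.discharge_date,
               case when exists (
                        select 1
                        from admissions r
                        where r.patient_id = a.patient_id
                          and r.admit_date > a.discharge_date
                          and r.admit_date <= a.discharge_date + interval '30 days'
                    ) then 1.0 else 0.0 end as readmitted_30d
        from admissions a
        where a.discharge_date is not null
    )
    select facility_id,
           date_trunc('month', discharge_date) as month,
           avg(readmitted_30d)                 as readmission_rate_30d,
           count(*)                            as discharges
    from discharges
    group by 1, 2
    order by 1, 2
"""
kpi = pd.read_sql(sql, engine)
print(kpi.head())
```

Row-level security in Superset would then restrict which facility_id values each viewer of the resulting dashboard can see.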
Posted 3 weeks ago
7.0 - 12.0 years
0 Lacs
maharashtra
On-site
As a Lead Data Engineer, you will be responsible for leveraging your 7 to 12+ years of hands-on experience in SQL database design, data architecture, ETL, Data Warehousing, Data Mart, Data Lake, Big Data, Cloud (AWS), and Data Governance domains. Your expertise in a modern programming language such as Scala, Python, or Java, with a preference for Spark/PySpark, will be crucial in this role. Your role will require you to have experience with configuration management and version control tools like Git, along with familiarity working within a CI/CD framework. If you have experience in building frameworks, it will be considered a significant advantage. A minimum of 8 years of recent hands-on SQL programming experience in a Big Data environment is necessary, with a preference for experience in Hadoop/Hive. Proficiency in PostgreSQL, RDBMS, NoSQL, and columnar databases will be beneficial for this role. Your hands-on experience in AWS Cloud data engineering components, including API Gateway, Glue, IoT Core, EKS, ECS, S3, RDS, Redshift, and EMR, will play a vital role in developing and maintaining ETL applications and data pipelines using big data technologies. Experience with Apache Kafka, Spark, and Airflow is a must-have for this position. If you are excited about this opportunity and possess the required skills and experience, please share your CV with us at omkar@hrworksindia.com. We look forward to potentially welcoming you to our team. Regards, Omkar
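Since Airflow is listed as a must-have alongside Spark, here is a minimal, hypothetical Airflow DAG showing how such pipelines are commonly orchestrated: a daily PySpark batch job followed by a data-quality check. The DAG id, schedule, script locations, and check script are invented, and the `schedule` argument assumes Airflow 2.4+ (older releases use `schedule_interval`):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical daily pipeline: submit a PySpark job, then run a data-quality check.
with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",   # 02:00 daily; use schedule_interval on Airflow < 2.4
    catchup=False,
) as dag:

    run_spark_job = BashOperator(
        task_id="run_spark_job",
        bash_command="spark-submit s3://example-bucket/jobs/orders_etl.py",  # assumed script location
    )

    validate_output = BashOperator(
        task_id="validate_output",
        bash_command="python /opt/pipelines/checks/orders_rowcount_check.py",  # assumed check script
    )

    run_spark_job >> validate_output
```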
Posted 3 weeks ago