About The Company:
At ReKnew, our mission is to empower enterprises to revitalize their core business and organization by positioning themselves for the new world of AI. We're a startup founded by seasoned practitioners, supported by expert advisors, and built on decades of experience in enterprise technology, data, analytics, AI, digital, and automation across diverse industries. We're actively seeking top talent to join us in this mission.

What You'll Do:
As a DevOps Engineer, you will focus on building and maintaining robust, scalable, and secure continuous delivery platforms. This hands-on role involves designing and implementing infrastructure and application solutions across hybrid cloud environments, with a strong emphasis on automation, reliability, and continuous improvement. You will contribute to the deployment and operational excellence of critical applications, including those within the big data and AI/ML ecosystems.

Key Responsibilities:
• Design, implement, and manage CI/CD pipelines for automated code deployments.
• Provision and manage cloud infrastructure using Infrastructure as Code (IaC) tools such as Terraform and CloudFormation.
• Administer and optimize container orchestration platforms, primarily Kubernetes and EKS.
• Develop automation scripts and tools using Python (a minimal sketch follows this posting).
• Implement comprehensive monitoring, logging, and alerting solutions for application and infrastructure health.
• Ensure security best practices are integrated throughout the development and deployment lifecycle.
• Support and optimize environments for big data and AI/ML workloads, including Airflow.

Qualifications:
• 4+ years of professional experience in a DevOps role.
• Solid experience with Kubernetes and EKS.
• Proficiency in Docker and containerization best practices.
• Hands-on experience with IaC tools (Terraform, CloudFormation).
• Strong programming skills in Python for automation.
• Demonstrated experience implementing robust monitoring, alerting, and logging solutions.
• Familiarity with CI/CD principles and tools for code deployments.
• Understanding of security principles in cloud environments.
• Experience with or knowledge of the big data ecosystem and AI/ML infrastructure.
• Experience with workflow orchestration tools, specifically Airflow.
• Exposure to or understanding of the three major cloud platforms (AWS, Azure, GCP).

Who You Are:
• A passionate problem-solver with a strong bias for automation.
• Committed to building reliable, scalable, and secure systems.
• An excellent communicator with a collaborative mindset.
• Proactive in identifying and resolving operational challenges.
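By way of illustration only (the posting itself contains no code), here is a minimal sketch of the kind of Python automation this role involves: flagging Kubernetes Deployments running below their desired replica count using the official Kubernetes Python client. The cluster context and the alerting hand-off are assumptions, not ReKnew systems.

```python
# Minimal sketch: flag Kubernetes Deployments running below their desired
# replica count. Assumes a local kubeconfig (e.g., for EKS, one produced
# by `aws eks update-kubeconfig`).
from kubernetes import client, config


def find_degraded_deployments() -> list[str]:
    config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
    apps = client.AppsV1Api()
    degraded = []
    for dep in apps.list_deployment_for_all_namespaces(watch=False).items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0  # None when no pods are ready
        if ready < desired:
            degraded.append(
                f"{dep.metadata.namespace}/{dep.metadata.name}: {ready}/{desired} ready"
            )
    return degraded


if __name__ == "__main__":
    for line in find_degraded_deployments():
        print(line)  # in practice this would feed an alerting or metrics channel
```

A check like this could be scheduled in Airflow or exported as a metric to the monitoring stack described above.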
Company Overview
At ReKnew, our mission is to empower enterprises to revitalize their core business and organization by positioning themselves for the new world of AI. We're a startup founded by seasoned practitioners, supported by expert advisors, and built on decades of experience in enterprise technology, data, analytics, AI, digital, and automation across diverse industries. We're actively seeking top talent to join us in this mission.

Job Description
We're seeking a highly skilled Senior Data Engineer with deep expertise in AWS-based data solutions. In this role, you'll be responsible for designing, building, and optimizing large-scale data pipelines and frameworks that power analytics and machine learning workloads. You'll lead the modernization of legacy systems by migrating workloads from platforms like Teradata to AWS-native big data environments such as EMR, Glue, and Redshift. A strong emphasis is placed on reusability, automation, observability, performance optimization, and managing schema evolution in dynamic data lake environments.

Key Responsibilities
• Migration & Modernization: Build reusable accelerators and frameworks to migrate data from legacy platforms (e.g., Teradata) to AWS-native architectures such as EMR, Glue, and Redshift.
• Data Pipeline Development: Design and implement robust ETL/ELT pipelines using Python, PySpark, and SQL on AWS big data platforms (a minimal sketch follows this posting).
• Code Quality & Testing: Drive development standards with test-driven development (TDD), unit testing, and automated validation of data pipelines.
• Monitoring & Observability: Build operational tooling and dashboards for pipeline observability, including tracking key metrics like latency, throughput, data quality, and cost.
• Cloud-Native Engineering: Architect scalable, secure data workflows using AWS services such as Glue, Lambda, Step Functions, S3, and Athena.
• Collaboration: Partner with internal product teams, data scientists, and external stakeholders to clarify requirements and drive solutions aligned with business goals.
• Architecture & Integration: Work with enterprise architects to evolve data architecture while securely integrating AWS systems with on-premise or hybrid environments. This includes strategic adoption of data lake table formats like Delta Lake, Apache Iceberg, or Apache Hudi for schema management and ACID capabilities.
• ML Support & Experimentation: Enable data scientists to operationalize machine learning models by providing clean, well-governed datasets at scale.
• Documentation & Enablement: Document solutions thoroughly and provide technical guidance and knowledge sharing to internal engineering teams.
• Team Training & Mentoring: Act as a subject matter expert, providing guidance, training, and mentorship to junior and mid-level data engineers, fostering a culture of continuous learning and best practices within the team.

Qualifications
• Experience: 7+ years in technology roles, with at least 5 years specifically in data engineering, software development, and distributed systems.
• Programming: Expert in Python and PySpark (Scala is a plus), with a deep understanding of software engineering best practices.
• AWS Expertise: 3+ years of hands-on experience in the AWS data ecosystem. Proficient in AWS Glue, S3, Redshift, EMR, Athena, Step Functions, and Lambda. Experience with AWS Lake Formation and data cataloging tools is a plus. An AWS Data Analytics or Solutions Architect certification is a strong plus.
• Big Data & MPP Systems: Strong grasp of distributed data processing. Experience with MPP data warehouses like Redshift, Snowflake, or Databricks on AWS. Hands-on experience with Delta Lake, Apache Iceberg, or Apache Hudi for building reliable data lakes with schema evolution, ACID transactions, and time travel capabilities.
• DevOps & Tooling: Experience with version control (e.g., GitHub/CodeCommit) and CI/CD tools (e.g., CodePipeline, Jenkins). Familiarity with containerization and deployment in Kubernetes or ECS.
• Data Quality & Governance: Experience with data profiling, data lineage, and relevant tools. Understanding of metadata management and data security best practices.
• Bonus: Experience supporting machine learning or data science workflows. Familiarity with BI tools such as QuickSight, Power BI, or Tableau.
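As a flavor of the pipeline work described above (no code appears in the posting itself), here is a minimal PySpark sketch of an ETL step: read raw data from S3, clean it, and write partitioned output. The bucket names, paths, and columns are hypothetical placeholders.

```python
# Minimal ETL sketch: read raw CSV, deduplicate and type the data,
# write date-partitioned Parquet. Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

raw = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

cleaned = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount").cast("double") > 0)
)

(
    cleaned.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")
)
```

In a lakehouse built on Delta Lake, Iceberg, or Hudi, the final write would target that table format (e.g., `.format("delta")`) rather than raw Parquet, gaining the ACID transactions and schema evolution the qualifications call for.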
We’re looking for a Marketing Associate who’s ready to learn fast, think sharp, and own execution.

What you'll do
• Manage ReKnew’s content calendar and social presence (with a focus on LinkedIn)
• Support campaign execution, partnerships, and community engagement
• Research trends, competitors, and GTM signals, and turn insights into action
• Assist in event promotion and thought leadership amplification
• Track campaign performance with simple reporting

What you bring
• 3+ years in marketing, communications, or a related field
• Bachelor's degree in a relevant field
• Strong writing and storytelling skills
• Curiosity about AI, GTM, and B2B SaaS
• A creative mindset and detail-driven execution
• Familiarity with tools like LinkedIn Campaign Manager, Canva, HubSpot, and other leading CRM and marketing tools
• Working knowledge of essential LLM tools such as ChatGPT, Perplexity, and Claude

Why join ReKnew?
This isn’t a back-office role. You’ll be in the room where strategy meets execution. You’ll experiment, ship campaigns, and see the direct impact of your work. Most importantly, you’ll grow with a team that’s building a new category in GTM.
ReKnew Overview
We are ReKnew! Our mission is to help enterprises revitalize their core business and organization by positioning themselves for the new world of AI. We believe that AI and data are not just a technological shift but a fundamental business transformation. We are a startup founded by practitioners with decades of experience in enterprise technology, data, analytics, AI, digital, and automation. We are surrounded by a world-class advisory board and are dedicated to building a company that values expertise, innovation, and direct impact.

Description
Join our dynamic engineering team to build the next generation of Data, AI, and Agentic applications that will drive significant value for our company, our clients, and our partners. We are looking for an experienced Python Application Engineer who thrives in a fast-paced environment and is deeply familiar with the modern Python ecosystem for building robust, scalable, and intelligent applications and services. This role is ideal for professionals looking to blend their technical depth in modern Python with AI/data engineering.

Key Responsibilities
• Design, develop, and deploy high-performance, maintainable Python-based applications and microservices/APIs for our Data & AI/Agentic application suite.
• Work closely with data engineers, data scientists, and AI engineers to productionize sophisticated applications and intelligent agent workflows.
• Implement robust data validation and serialization using frameworks like Pydantic to ensure data integrity across our systems (a minimal sketch follows this posting).
• Build and maintain reliable applications and REST APIs using the FastAPI framework, ensuring low-latency, high-throughput web server performance.
• Containerize and orchestrate services using Docker for consistent development, testing, and production environments.
• Implement professional-grade logging and monitoring to ensure operational visibility and fast troubleshooting.
• Develop database interaction layers using SQLAlchemy (ORM) for efficient and secure data persistence.
• Collaborate on architectural decisions, documentation, code reviews, and setting best practices for Python development.
• Participate in demos and presentations, and train others as needed.

Required Skills & Attributes for Success
We are seeking a candidate with 3+ years of professional experience in a similar role who possesses deep expertise in:
• Modern Python and object-oriented programming best practices.
• Pydantic for data modeling, validation, and settings management.
• Web Servers & HTTP: Strong understanding of HTTP protocols, asynchronous programming, and building high-performance services with frameworks like FastAPI.
• API/Microservices Development: Proven track record of building and documenting resilient RESTful APIs.
• Database ORM: Extensive experience with SQLAlchemy for efficient database operations and schema management.
• Containerization: Proficiency in creating and managing applications using Docker.
• Observability: Implementing effective structured logging and basic monitoring strategies.
• Templating: Familiarity with templating engines like Jinja (or similar) for minimal front-end or internal tooling needs.
• Dev Tools: Expertise in managing source code in Git/GitHub, developing in VSCode, and writing tests with PyTest.

Preferred Skills
Experience in any of the following areas will be a significant advantage:
• AI/Agentic Frameworks: Hands-on experience with modern Python-based AI orchestration and development tools like LangChain or LangGraph.
• Data Manipulation: Expertise in large-scale data wrangling and analysis using Pandas or Polars.
• Databases: Practical experience with relational database systems such as PostgreSQL, or with file-based systems like SQLite.
• NoSQL: Practical experience building AI-native data applications on vector databases or graph databases.
• Cloud/DevOps: Familiarity with CI/CD pipelines and deployment to major cloud platforms (AWS, GCP, or Azure).
• AI Coding Assistants: Hands-on experience developing Python applications aided by an AI assistant such as GitHub Copilot, Cursor, or a major CLI tool.

Salary Range
Commensurate with experience.

What We Offer
Professional Growth
• Opportunity to lead cutting-edge AI transformation at a global financial institution
• Access to the latest AI development technologies and platforms
• Collaboration with industry-leading AI researchers and technology vendors
• Professional development budget for advanced AI/ML training and certification

Compensation & Benefits
• Competitive base salary with a performance-based bonus structure
• Stock options and long-term incentive compensation
• Open to both full-time employees and experienced consultants/contractors

Work Environment
• Fast-paced, innovation-focused environment with strong investment in emerging technologies
• Collaborative culture with an emphasis on continuous learning and professional development
• Global organization with opportunities for international exposure and career advancement
• Strong commitment to diversity, equity, inclusion, and employee well-being

If you're passionate about building intelligent data and AI applications that enable enterprise-wide transformation, we’d love to hear from you. To be considered, candidates should have GitHub repositories and/or contributions to open-source projects in addition to relevant experience.
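For a sense of the stack named in this posting, here is a minimal, hypothetical sketch (not ReKnew production code) combining FastAPI and Pydantic. The "agents" resource, its fields, and the in-memory store are invented for illustration; a real service would back this with SQLAlchemy.

```python
# Minimal FastAPI + Pydantic (v2) sketch. The resource and fields are
# invented for illustration only.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field

app = FastAPI(title="Agent Registry (illustrative)")


class AgentIn(BaseModel):
    """Request body, validated by Pydantic before the handler runs."""
    name: str = Field(min_length=1, max_length=64)
    model: str = "gpt-4o"  # hypothetical default


class AgentOut(AgentIn):
    id: int


_registry: dict[int, AgentOut] = {}  # stand-in for a SQLAlchemy-backed store


@app.post("/agents", response_model=AgentOut, status_code=201)
async def create_agent(agent: AgentIn) -> AgentOut:
    new = AgentOut(id=len(_registry) + 1, **agent.model_dump())
    _registry[new.id] = new
    return new


@app.get("/agents/{agent_id}", response_model=AgentOut)
async def read_agent(agent_id: int) -> AgentOut:
    if agent_id not in _registry:
        raise HTTPException(status_code=404, detail="agent not found")
    return _registry[agent_id]
```

Assuming the file is saved as app.py, it can be run with `uvicorn app:app --reload`; Pydantic rejects invalid request bodies (e.g., an empty name) with a 422 response before the handler executes.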