The system can take anything from a realistic photo of an object taken with your phone to 2D concept art or 2D pencil/pen sketches. It doesn't matter whether 3D perspective is shown, though it definitely helps. As for multiple images: if only one view is provided, the algorithm makes an educated guess about the unseen sides; the more views you provide, the less guessing is involved. Multiple images usually make a difference for more complex objects. For example, if you submit an image of an animal whose tail is not visible, the algorithm will assume something, but it might not be what you had in mind. By contrast, a table is a pretty simple object, so multiple pictures won't make a big difference.
What types of objects work best?
At the moment, hard-surface objects work best. Real humans, animals, and trees are out of scope for the time being. That said, anything stylised or cartoon-ish should work well.
Can we do more than 3 demos?
For more testing, we would suggest getting our smallest package of 30 generations here. This also means that our team will be attending to any feedback and new feature requests you might have before moving forward.
Do you recommend having different views of the same object for more accurate models? I saw that in the tutorial video there was a case with 2 views of a dinosaur.
It depends on the object you want to model. For basic models (i.e. a table) different views won't make a huge difference. For the dinosaur though, if the picture from the side wasn't included the algorithm wouldn't know what the tail should look like.
Should I upload each view as a separate image?
Yes, the different views should be different images.
What is your output file format(s) currently?
We currently export in .obj, .fbx, .glb, and .gltf.
How many iterations can I do per model?
As many as you want as long as you have enough credits.
What happens if I interrupt the browser view showing the message "Processing..." because I close the tab or my PC's battery runs out?
No need to worry about that, the generation won't get affected at all. You can close your browser or visit your asset library on our app or do whatever else you wish while waiting for your asset to complete!
Is iteration a manual process on Kaedim's end?
Indeed, the iteration is a manual process on our side. We introduced the feature based on customer feedback as a win-win: a) you get any faults fixed, and b) we get feedback to incorporate into our algorithms as we upgrade them.
How much time does it take for the iteration to be completed?
There is no guarantee as to how long an edit takes: it might be 2 minutes or it might be 30.
How do I add another account to my plan, if I have a plan that supports multiple accounts?
This process isn't automated yet so your team will have to sign up at https://www.app.kaedim3d.com/signup the same way you did and let us know once done, so we can connect your accounts.
Do you offer an education discount?
Unfortunately, we don't currently have an education-specific discount. If you'd like to discuss your projects with someone from our team and enquire about a potential partnership please email email@example.com with some more information.
Do you offer or plan on offering smaller plans for individuals in the future?
Yes, we do! We are aiming to release a consumer-friendly version of our software during 2023.
How can I use Kaedim as a tool for my project?
There are two main use cases for Kaedim in projects involving 3D asset creation. The first is accelerating in-house 3D asset production: using our AI software can introduce a minimum 20x speedup for your 3D team. The second is integrating 3D UGC into your game, app, or metaverse: with our API, end-users with zero modelling experience can create in 3D just by submitting an image, populating your digital worlds.
How accurate are the 3D outputs compared to the 2D input image?
The accuracy of the generated 3D model depends on the complexity of the input image and the polycount you set. For simple inputs, you can expect really high accuracy from input to output. If the input is highly detailed, some of that fine detail may be lost in the transition. For these cases, you can use the High Detail setting as well as increase the polycount limit of your generation.
Do you offer a trial?
We do offer a trial! There is currently a $6 trial which allows for 3 3D model generations over 3 days (1/day). To access it, sign up here and, when you have your account up and running, check out with the trial.
Can I integrate your software into my app/game/platform/metaverse?
You certainly can integrate Kaedim’s tech into your own app/game/platform/metaverse! We have an API which allows you to do just that. Have a look at the corresponding documentation here and if you need help, don’t hesitate to book a call with our engineers to run you through the process and get you onboarded.
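As a rough illustration of what such an integration can look like, here is a minimal Python sketch that assembles an image-to-3D request. The base URL, endpoint path, field names, and headers below are placeholders rather than Kaedim's actual API schema; the official API documentation has the real request format.

```python
import json
from urllib import request

API_BASE = "https://api.example.com/v1"  # placeholder, not the real endpoint


def build_generation_request(image_url: str, api_key: str, polycount: int = 20000):
    """Assemble an HTTP request for a hypothetical image-to-3D endpoint.

    All field names here are illustrative placeholders; check the official
    API docs for the actual schema.
    """
    payload = {
        "imageUrl": image_url,        # publicly reachable 2D input image
        "polycountLimit": polycount,  # upper limit for the output mesh
    }
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return request.Request(
        f"{API_BASE}/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )
```

The assembled request would then be sent with `urllib.request.urlopen(...)` and the returned job polled until the model is ready for download.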
What can I do with Kaedim?
Kaedim is an AI-powered software that converts 2D images into 3D models. You can input an image (sketch, art, photo) and convert it into 3D within minutes, all at the touch of a button. Our partners then download the model and import it into their 3D modelling software of choice to polish and finalise it before including it in their game or app.
If I were to put a cartoon effect on a photo of me (selfie) would it help with the image not getting dismissed?
Feel free to try it out! Don’t worry about losing your credit: for any submission that is dismissed, your credit is returned!
Do you offer any discount?
We don’t currently offer any discounts, but we do offer a $6 trial which allows for 3 generations. Apart from that, please don’t hesitate to get in touch to see if we can help you in any other way.
How long would it take to set up an Enterprise plan?
The onboarding duration depends on what your Enterprise plan entails and whether we are helping you do integrations as well. For setting up an enterprise account on our web app, please schedule a demo/consultation here.
How long does it take to set up the Kaedim API?
For an expert developer, it shouldn’t take more than a couple of hours. Don’t hesitate to get in touch with us to get some help from our engineers.
Do the 3D models get generated in real dimensions?
All 3D outputs preserve the correct proportions of the object based on the input images. We are currently working on a feature that will let you specify the desired dimensions of the output as well, so stay tuned!
Is it a web-based platform?
Yes! You can access our software through our web app or our API. We also offer ready-made plugins for your favourite software.
What is the max polycount I can request for my 3D model?
You can input whatever polycount limit you wish; however, beyond 150K it won’t make a difference to the fidelity of the output. Please note that the sweet spot of our algorithm is 20K.
Is there a preferred image format for the input?
For the 2D image inputs, we support the .png, .jpg, and .jpeg formats.
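If you are submitting files programmatically, a simple pre-check against this list can save a round trip. The helper below is a hypothetical client-side example, not part of Kaedim's software:

```python
from pathlib import Path

# Supported 2D input formats, per the answer above
SUPPORTED_FORMATS = {".png", ".jpg", ".jpeg"}


def is_supported_input(filename: str) -> bool:
    """Return True if the file extension matches a supported 2D input format."""
    return Path(filename).suffix.lower() in SUPPORTED_FORMATS
```

For example, `is_supported_input("dino.PNG")` returns True, while `is_supported_input("sketch.webp")` returns False.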
Can I use a single input image?
Yes, you can use a single input image! Our AI will make educated guesses for all unseen parts.
What if my input is incorrect?
If your image doesn’t follow the input guidelines listed here, your generation will be dismissed and the credit used will be returned to your account to use for a different input.
Can I upload line-work sketches for converting to 3D models?
You can definitely upload 2D sketches and line-work for turning them into 3D models! Just make sure your sketch illustrates a single object and is clean. You can see some examples of sketch inputs in our Instagram page here.
Can the Kaedim software capture the high detail of the items in my photos?
Our AI software will capture as much detail as possible from your photo. For best results when uploading increased detail inputs please check the “High Detail” checkbox and increase the Upper Limit Polycount.
Are the output 3D models textured?
We do not currently support automatic texturing on the 3D model outputs. This is one of our most requested features so we are planning to release our BETA for automatic texturing by the end of Q1 2023.
Can I edit the 3D model I generated through the Kaedim web app?
Yes, you can edit your generated 3D model through Kaedim! Once you have your 3D model generation, if you feel it requires a change, you can request an Edit. More about the Edit feature here: https://docs.kaedim3d.com/edit-iteration
Can I add colour to the generated 3D models through the Kaedim web app?
Yes! We have a colouring tool integrated into our web app which allows you to select parts of the 3D model, colour them, and save. You can then download the coloured version by including the .mtl format in your download too. More on the colouring tool here.
Do I own the IP of the 3D models I create with Kaedim?
All the 3D models that you create with Kaedim are your IP, given that you are onboard with one of our commercial plans (Light, Pro, Enterprise).
Does Kaedim automatically generate UVs? Are the models automatically unwrapped and ready for texturing?
No, Kaedim’s AI doesn’t automatically generate the UVs of the models. This is a common request, so we’ve put it in our backlog! You currently need to download the generated model from our app, open it in your preferred 3D modelling software, and do the UV unwrapping by hand.
Is there a specific lens I should prefer if I am using photos as my 2D inputs?
There is no specific lens that you should be using for your input images, as long as the input guidelines are met. Keep in mind that if the lens distorts the object, the output 3D model will most likely follow the same distortion.
Do you offer free subscriptions?
We don’t currently offer any free subscriptions. The closest is our $6 trial, which gives you 3 generations over 3 days (1/day).
How much configurability is there in terms of the settings for the generated 3D model?
There are 3 settings you can change before sending your 2D image for processing: the upper limit polycount, the High Detail option, and the symmetry option. More details on the input settings can be found here.
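To make the three settings concrete, here is a hypothetical sketch of how they could be represented and sanity-checked before submission. The field names are illustrative, not the actual names used by Kaedim's app or API:

```python
# Hypothetical representation of the three pre-processing settings;
# the real names in Kaedim's app/API may differ.
generation_settings = {
    "polycount_upper_limit": 20000,  # the algorithm's sweet spot noted above
    "high_detail": False,            # enable for highly detailed inputs
    "symmetry": True,                # mirror geometry across the object's axis
}


def validate_settings(settings: dict) -> bool:
    """Basic sanity check on the hypothetical settings dict."""
    return (
        isinstance(settings["polycount_upper_limit"], int)
        and settings["polycount_upper_limit"] > 0
        and isinstance(settings["high_detail"], bool)
        and isinstance(settings["symmetry"], bool)
    )
```

A check like this only guards the obvious mistakes (e.g. a zero or negative polycount); the web app enforces the actual constraints.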